00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2026 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3291 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.084 The recommended git tool is: git 00:00:00.084 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.117 Fetching changes from the remote Git repository 00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.142 Using shallow fetch with depth 1 00:00:00.142 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.142 > git --version # timeout=10 00:00:00.165 > git --version # 'git version 2.39.2' 00:00:00.165 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.179 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.179 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.215 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.227 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.238 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:06.239 > git config core.sparsecheckout # timeout=10 00:00:06.248 > git read-tree -mu HEAD # timeout=10 00:00:06.263 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:06.288 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:06.288 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10 00:00:06.364 [Pipeline] Start of Pipeline 00:00:06.376 [Pipeline] library 00:00:06.377 Loading library shm_lib@master 00:00:06.377 Library shm_lib@master is cached. Copying from home. 00:00:06.395 [Pipeline] node 00:00:06.411 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.413 [Pipeline] { 00:00:06.421 [Pipeline] catchError 00:00:06.422 [Pipeline] { 00:00:06.432 [Pipeline] wrap 00:00:06.439 [Pipeline] { 00:00:06.447 [Pipeline] stage 00:00:06.449 [Pipeline] { (Prologue) 00:00:06.658 [Pipeline] sh 00:00:06.937 + logger -p user.info -t JENKINS-CI 00:00:06.956 [Pipeline] echo 00:00:06.958 Node: GP11 00:00:06.967 [Pipeline] sh 00:00:07.261 [Pipeline] setCustomBuildProperty 00:00:07.274 [Pipeline] echo 00:00:07.276 Cleanup processes 00:00:07.282 [Pipeline] sh 00:00:07.561 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.561 1185487 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.573 [Pipeline] sh 00:00:07.848 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.848 ++ grep -v 'sudo pgrep' 00:00:07.848 ++ awk '{print $1}' 00:00:07.848 + sudo kill -9 00:00:07.848 + true 00:00:07.862 [Pipeline] cleanWs 00:00:07.872 [WS-CLEANUP] Deleting project workspace... 00:00:07.872 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.877 [WS-CLEANUP] done 00:00:07.882 [Pipeline] setCustomBuildProperty 00:00:07.897 [Pipeline] sh 00:00:08.175 + sudo git config --global --replace-all safe.directory '*' 00:00:08.266 [Pipeline] httpRequest 00:00:08.297 [Pipeline] echo 00:00:08.298 Sorcerer 10.211.164.101 is alive 00:00:08.306 [Pipeline] httpRequest 00:00:08.309 HttpMethod: GET 00:00:08.310 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:08.310 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:08.320 Response Code: HTTP/1.1 200 OK 00:00:08.321 Success: Status code 200 is in the accepted range: 200,404 00:00:08.321 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:14.589 [Pipeline] sh 00:00:14.868 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:14.884 [Pipeline] httpRequest 00:00:14.913 [Pipeline] echo 00:00:14.915 Sorcerer 10.211.164.101 is alive 00:00:14.924 [Pipeline] httpRequest 00:00:14.929 HttpMethod: GET 00:00:14.929 URL: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:14.929 Sending request to url: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:14.945 Response Code: HTTP/1.1 200 OK 00:00:14.945 Success: Status code 200 is in the accepted range: 200,404 00:00:14.946 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:42.336 [Pipeline] sh 00:00:42.618 + tar --no-same-owner -xf spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:45.164 [Pipeline] sh 00:00:45.448 + git -C spdk log --oneline -n5 00:00:45.448 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:45.448 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:00:45.448 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:00:45.448 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:00:45.448 084afa904 util: copy errno before calling stdlib's functions 00:00:45.468 [Pipeline] withCredentials 00:00:45.478 > git --version # timeout=10 00:00:45.492 > git --version # 'git version 2.39.2' 00:00:45.508 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:45.511 [Pipeline] { 00:00:45.521 [Pipeline] retry 00:00:45.523 [Pipeline] { 00:00:45.541 [Pipeline] sh 00:00:45.823 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:46.405 [Pipeline] } 00:00:46.428 [Pipeline] // retry 00:00:46.434 [Pipeline] } 00:00:46.455 [Pipeline] // withCredentials 00:00:46.466 [Pipeline] httpRequest 00:00:46.490 [Pipeline] echo 00:00:46.492 Sorcerer 10.211.164.101 is alive 00:00:46.502 [Pipeline] httpRequest 00:00:46.507 HttpMethod: GET 00:00:46.507 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:46.508 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:46.521 Response Code: HTTP/1.1 200 OK 00:00:46.521 Success: Status code 200 is in the accepted range: 200,404 00:00:46.522 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:55.844 [Pipeline] sh 00:00:56.126 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:58.040 [Pipeline] sh 00:00:58.322 + git -C dpdk log --oneline -n5 
00:00:58.322 caf0f5d395 version: 22.11.4 00:00:58.322 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:00:58.322 dc9c799c7d vhost: fix missing spinlock unlock 00:00:58.322 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:00:58.322 6ef77f2a5e net/gve: fix RX buffer size alignment 00:00:58.333 [Pipeline] } 00:00:58.350 [Pipeline] // stage 00:00:58.361 [Pipeline] stage 00:00:58.363 [Pipeline] { (Prepare) 00:00:58.385 [Pipeline] writeFile 00:00:58.403 [Pipeline] sh 00:00:58.683 + logger -p user.info -t JENKINS-CI 00:00:58.693 [Pipeline] sh 00:00:58.972 + logger -p user.info -t JENKINS-CI 00:00:58.984 [Pipeline] sh 00:00:59.291 + cat autorun-spdk.conf 00:00:59.291 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.291 SPDK_TEST_NVMF=1 00:00:59.291 SPDK_TEST_NVME_CLI=1 00:00:59.291 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.291 SPDK_TEST_NVMF_NICS=e810 00:00:59.291 SPDK_TEST_VFIOUSER=1 00:00:59.291 SPDK_RUN_UBSAN=1 00:00:59.291 NET_TYPE=phy 00:00:59.291 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:59.291 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.297 RUN_NIGHTLY=1 00:00:59.302 [Pipeline] readFile 00:00:59.326 [Pipeline] withEnv 00:00:59.328 [Pipeline] { 00:00:59.339 [Pipeline] sh 00:00:59.616 + set -ex 00:00:59.616 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:59.616 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.616 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.616 ++ SPDK_TEST_NVMF=1 00:00:59.616 ++ SPDK_TEST_NVME_CLI=1 00:00:59.616 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.616 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.616 ++ SPDK_TEST_VFIOUSER=1 00:00:59.616 ++ SPDK_RUN_UBSAN=1 00:00:59.616 ++ NET_TYPE=phy 00:00:59.616 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:00:59.616 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:00:59.616 ++ RUN_NIGHTLY=1 00:00:59.616 + case $SPDK_TEST_NVMF_NICS in 00:00:59.616 + DRIVERS=ice 00:00:59.616 + [[ tcp == \r\d\m\a ]] 00:00:59.616 + [[ -n ice ]] 00:00:59.616 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:59.616 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:59.616 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:59.616 rmmod: ERROR: Module irdma is not currently loaded 00:00:59.616 rmmod: ERROR: Module i40iw is not currently loaded 00:00:59.616 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:59.616 + true 00:00:59.616 + for D in $DRIVERS 00:00:59.616 + sudo modprobe ice 00:00:59.616 + exit 0 00:00:59.624 [Pipeline] } 00:00:59.641 [Pipeline] // withEnv 00:00:59.646 [Pipeline] } 00:00:59.662 [Pipeline] // stage 00:00:59.671 [Pipeline] catchError 00:00:59.673 [Pipeline] { 00:00:59.688 [Pipeline] timeout 00:00:59.688 Timeout set to expire in 50 min 00:00:59.690 [Pipeline] { 00:00:59.704 [Pipeline] stage 00:00:59.706 [Pipeline] { (Tests) 00:00:59.722 [Pipeline] sh 00:01:00.001 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.001 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.001 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.001 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:00.001 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.001 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.001 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:00.001 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.001 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.001 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.001 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:00.001 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.001 + source /etc/os-release 00:01:00.001 ++ NAME='Fedora Linux' 00:01:00.001 ++ VERSION='38 (Cloud Edition)' 00:01:00.001 ++ ID=fedora 00:01:00.001 ++ VERSION_ID=38 00:01:00.001 ++ VERSION_CODENAME= 00:01:00.001 ++ PLATFORM_ID=platform:f38 00:01:00.001 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:00.001 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.001 ++ LOGO=fedora-logo-icon 00:01:00.001 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:00.001 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.001 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:00.001 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.001 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.001 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.001 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:00.001 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.001 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:00.001 ++ SUPPORT_END=2024-05-14 00:01:00.001 ++ VARIANT='Cloud Edition' 00:01:00.001 ++ VARIANT_ID=cloud 00:01:00.001 + uname -a 00:01:00.001 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:00.001 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:00.936 Hugepages 00:01:00.936 node hugesize free / total 00:01:00.936 node0 1048576kB 0 / 0 00:01:00.936 node0 2048kB 0 / 0 00:01:00.936 node1 1048576kB 0 / 0 00:01:00.936 node1 2048kB 0 / 0 00:01:00.936 00:01:00.936 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:00.936 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:00.936 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:00.936 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:01.195 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:01.195 + rm -f /tmp/spdk-ld-path 00:01:01.195 + source autorun-spdk.conf 00:01:01.195 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.195 ++ SPDK_TEST_NVMF=1 00:01:01.195 ++ SPDK_TEST_NVME_CLI=1 00:01:01.195 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.195 ++ SPDK_TEST_NVMF_NICS=e810 00:01:01.195 ++ SPDK_TEST_VFIOUSER=1 00:01:01.195 ++ SPDK_RUN_UBSAN=1 00:01:01.195 ++ NET_TYPE=phy 00:01:01.195 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:01.195 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:01.195 ++ RUN_NIGHTLY=1 00:01:01.195 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.195 + [[ -n '' ]] 00:01:01.195 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.195 + for M in /var/spdk/build-*-manifest.txt 00:01:01.195 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.195 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.196 + for M in /var/spdk/build-*-manifest.txt 00:01:01.196 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.196 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.196 ++ uname 00:01:01.196 + [[ Linux == \L\i\n\u\x ]] 00:01:01.196 + sudo dmesg -T 00:01:01.196 + sudo dmesg --clear 00:01:01.196 + dmesg_pid=1186810 00:01:01.196 + [[ Fedora Linux == FreeBSD ]] 00:01:01.196 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.196 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.196 + sudo dmesg -Tw 00:01:01.196 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.196 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.196 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.196 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.196 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.196 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:01.196 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.196 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.196 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.196 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.196 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.196 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.196 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.196 Test configuration: 00:01:01.196 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.196 SPDK_TEST_NVMF=1 00:01:01.196 SPDK_TEST_NVME_CLI=1 00:01:01.196 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.196 SPDK_TEST_NVMF_NICS=e810 00:01:01.196 SPDK_TEST_VFIOUSER=1 00:01:01.196 SPDK_RUN_UBSAN=1 00:01:01.196 NET_TYPE=phy 00:01:01.196 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:01.196 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:01.196 RUN_NIGHTLY=1 01:39:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:01.196 01:39:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.196 01:39:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.196 01:39:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.196 01:39:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.196 01:39:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.196 01:39:16 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.196 01:39:16 -- paths/export.sh@5 -- $ export PATH 00:01:01.196 01:39:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.196 01:39:16 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:01.196 01:39:16 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:01.196 01:39:16 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721777956.XXXXXX 00:01:01.196 01:39:16 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721777956.w4ukvs 00:01:01.196 01:39:16 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:01.196 01:39:16 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:01:01.196 01:39:16 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:01.196 01:39:16 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:01.196 01:39:16 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.196 01:39:16 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.196 01:39:16 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:01.196 01:39:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:01.196 01:39:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.196 01:39:16 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:01.196 01:39:16 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:01.196 01:39:16 -- pm/common@17 -- $ local monitor 00:01:01.196 01:39:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.196 01:39:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.196 01:39:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.196 01:39:16 -- pm/common@21 -- $ date +%s 00:01:01.196 01:39:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.196 01:39:16 -- pm/common@21 -- $ date +%s 00:01:01.196 01:39:16 -- pm/common@25 -- $ sleep 1 00:01:01.196 01:39:16 -- pm/common@21 -- $ date +%s 00:01:01.196 01:39:16 -- pm/common@21 -- $ date +%s 00:01:01.196 01:39:16 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721777956 00:01:01.196 01:39:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721777956 00:01:01.196 01:39:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721777956 00:01:01.196 01:39:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721777956 00:01:01.196 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721777956_collect-vmstat.pm.log 00:01:01.196 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721777956_collect-cpu-load.pm.log 00:01:01.196 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721777956_collect-cpu-temp.pm.log 00:01:01.196 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721777956_collect-bmc-pm.bmc.pm.log 00:01:02.576 01:39:17 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:02.576 01:39:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.576 01:39:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.576 01:39:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.576 01:39:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.576 Tue Jul 23 11:39:17 PM UTC 2024 00:01:02.576 01:39:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.576 v24.09-pre-309-g78cbcfdde 00:01:02.576 01:39:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:02.576 01:39:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.576 01:39:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.576 01:39:17 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:02.576 01:39:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:02.576 01:39:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.576 ************************************ 00:01:02.576 START TEST ubsan 00:01:02.576 ************************************ 00:01:02.576 01:39:17 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:02.576 using ubsan 00:01:02.576 00:01:02.576 real 0m0.000s 00:01:02.576 user 0m0.000s 00:01:02.576 sys 0m0.000s 00:01:02.576 01:39:17 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:02.576 01:39:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.576 ************************************ 00:01:02.576 END TEST ubsan 00:01:02.576 ************************************ 00:01:02.576 01:39:17 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:02.576 01:39:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:02.576 01:39:17 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:02.576 01:39:17 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:02.576 01:39:17 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:02.576 01:39:17 -- common/autotest_common.sh@10 -- $ set 
+x 00:01:02.576 ************************************ 00:01:02.576 START TEST build_native_dpdk 00:01:02.576 ************************************ 00:01:02.576 01:39:17 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:02.576 caf0f5d395 version: 22.11.4 00:01:02.576 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:02.576 dc9c799c7d vhost: fix missing spinlock unlock 00:01:02.576 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:02.576 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:02.576 
01:39:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:02.576 01:39:17 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:02.576 patching file config/rte_config.h 00:01:02.576 Hunk #1 succeeded at 60 (offset 1 line). 00:01:02.576 01:39:17 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:02.577 01:39:17 build_native_dpdk -- scripts/common.sh@365 -- $ return 0 00:01:02.577 01:39:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:02.577 patching file lib/pcapng/rte_pcapng.c 00:01:02.577 Hunk #1 succeeded at 110 (offset -18 lines). 00:01:02.577 01:39:17 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:02.577 01:39:17 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:02.577 01:39:17 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:02.577 01:39:17 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:02.577 01:39:17 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:06.766 The Meson build system 00:01:06.766 Version: 1.3.1 00:01:06.766 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:06.766 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:06.766 Build type: native build 00:01:06.766 Program cat found: YES (/usr/bin/cat) 00:01:06.766 Project name: DPDK 00:01:06.766 Project version: 22.11.4 00:01:06.766 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:06.766 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:06.766 Host machine cpu family: x86_64 00:01:06.766 Host machine cpu: x86_64 00:01:06.766 Message: ## Building in Developer Mode ## 00:01:06.766 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:06.766 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:06.766 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:06.766 Program objdump found: YES (/usr/bin/objdump) 00:01:06.766 Program python3 found: YES (/usr/bin/python3) 00:01:06.766 Program cat found: YES (/usr/bin/cat) 00:01:06.766 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:06.766 Checking for size of "void *" : 8 00:01:06.766 Checking for size of "void *" : 8 (cached) 00:01:06.767 Library m found: YES 00:01:06.767 Library numa found: YES 00:01:06.767 Has header "numaif.h" : YES 00:01:06.767 Library fdt found: NO 00:01:06.767 Library execinfo found: NO 00:01:06.767 Has header "execinfo.h" : YES 00:01:06.767 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.767 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:06.767 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:06.767 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:06.767 Run-time dependency openssl found: YES 3.0.9 00:01:06.767 Run-time dependency libpcap found: YES 1.10.4 00:01:06.767 Has header "pcap.h" with dependency libpcap: YES 00:01:06.767 Compiler for C supports arguments -Wcast-qual: YES 00:01:06.767 Compiler for C supports arguments -Wdeprecated: YES 00:01:06.767 Compiler for C supports arguments -Wformat: YES 00:01:06.767 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:06.767 Compiler for C supports arguments -Wformat-security: NO 00:01:06.767 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.767 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:06.767 Compiler for C supports arguments -Wnested-externs: YES 00:01:06.767 Compiler for C supports arguments -Wold-style-definition: YES 00:01:06.767 Compiler for C supports arguments -Wpointer-arith: YES 00:01:06.767 Compiler for C supports arguments -Wsign-compare: YES 00:01:06.767 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:06.767 Compiler for C supports arguments -Wundef: YES 00:01:06.767 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.767 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:06.767 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:06.767 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:06.767 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:06.767 Compiler for C supports arguments -mavx512f: YES 00:01:06.767 Checking if "AVX512 checking" compiles: YES 00:01:06.767 Fetching value of define "__SSE4_2__" : 1 00:01:06.767 Fetching value of define "__AES__" : 1 00:01:06.767 Fetching value of define "__AVX__" : 1 00:01:06.767 Fetching value of define "__AVX2__" : (undefined) 00:01:06.767 Fetching value of define "__AVX512BW__" : (undefined) 00:01:06.767 Fetching value of define "__AVX512CD__" : (undefined) 00:01:06.767 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:06.767 Fetching value of define "__AVX512F__" : (undefined) 00:01:06.767 Fetching value of define "__AVX512VL__" : (undefined) 00:01:06.767 Fetching value of define "__PCLMUL__" : 1 00:01:06.767 Fetching value of define "__RDRND__" : 1 00:01:06.767 Fetching value of define "__RDSEED__" : (undefined) 00:01:06.767 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:06.767 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:06.767 Message: lib/kvargs: Defining dependency "kvargs" 00:01:06.767 Message: lib/telemetry: Defining dependency "telemetry" 00:01:06.767 Checking for function "getentropy" : YES 00:01:06.767 Message: lib/eal: Defining dependency "eal" 00:01:06.767 Message: lib/ring: Defining dependency "ring" 00:01:06.767 Message: lib/rcu: Defining dependency "rcu" 00:01:06.767 Message: lib/mempool: Defining dependency "mempool" 00:01:06.767 Message: lib/mbuf: Defining dependency "mbuf" 00:01:06.767 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:06.767 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.767 Compiler for C supports arguments -mpclmul: YES 00:01:06.767 Compiler for C supports arguments -maes: YES 00:01:06.767 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:06.767 Compiler for C supports arguments -mavx512bw: YES 00:01:06.767 Compiler for C supports arguments -mavx512dq: YES 00:01:06.767 Compiler for C supports arguments -mavx512vl: YES 00:01:06.767 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:06.767 Compiler for C supports arguments -mavx2: YES 00:01:06.767 Compiler for C supports arguments -mavx: YES 00:01:06.767 Message: lib/net: Defining dependency "net" 00:01:06.767 Message: lib/meter: Defining dependency "meter" 00:01:06.767 Message: lib/ethdev: Defining dependency "ethdev" 00:01:06.767 Message: lib/pci: Defining dependency "pci" 00:01:06.767 Message: lib/cmdline: Defining dependency "cmdline" 00:01:06.767 Message: lib/metrics: Defining dependency "metrics" 00:01:06.767 Message: lib/hash: Defining dependency "hash" 00:01:06.767 Message: lib/timer: Defining dependency "timer" 00:01:06.767 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:06.767 Compiler for C supports arguments -mavx2: YES (cached) 00:01:06.767 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.767 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:06.767 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:06.767 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:06.767 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:06.767 Message: lib/acl: Defining dependency "acl" 00:01:06.767 Message: lib/bbdev: Defining dependency "bbdev" 00:01:06.767 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:06.767 Run-time dependency libelf found: YES 0.190 00:01:06.767 Message: lib/bpf: Defining dependency "bpf" 00:01:06.767 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:06.767 Message: lib/compressdev: Defining dependency "compressdev" 00:01:06.767 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:06.767 Message: lib/distributor: Defining dependency "distributor" 00:01:06.767 Message: lib/efd: Defining dependency "efd" 00:01:06.767 Message: lib/eventdev: Defining dependency "eventdev" 00:01:06.767 Message: lib/gpudev: Defining dependency "gpudev" 00:01:06.767 Message: lib/gro: Defining dependency "gro" 00:01:06.767 Message: lib/gso: Defining dependency "gso" 00:01:06.767 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:06.767 Message: lib/jobstats: Defining dependency "jobstats" 00:01:06.767 Message: lib/latencystats: Defining dependency "latencystats" 00:01:06.767 Message: lib/lpm: Defining dependency "lpm" 00:01:06.767 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.767 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:06.767 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:06.767 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:06.767 Message: lib/member: Defining dependency "member" 00:01:06.767 Message: lib/pcapng: Defining dependency "pcapng" 00:01:06.767 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:06.767 Message: lib/power: Defining dependency "power" 00:01:06.767 Message: lib/rawdev: Defining dependency "rawdev" 00:01:06.767 Message: lib/regexdev: Defining dependency "regexdev" 
00:01:06.767 Message: lib/dmadev: Defining dependency "dmadev" 00:01:06.767 Message: lib/rib: Defining dependency "rib" 00:01:06.767 Message: lib/reorder: Defining dependency "reorder" 00:01:06.767 Message: lib/sched: Defining dependency "sched" 00:01:06.767 Message: lib/security: Defining dependency "security" 00:01:06.767 Message: lib/stack: Defining dependency "stack" 00:01:06.767 Has header "linux/userfaultfd.h" : YES 00:01:06.767 Message: lib/vhost: Defining dependency "vhost" 00:01:06.767 Message: lib/ipsec: Defining dependency "ipsec" 00:01:06.767 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:06.767 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:06.767 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:06.767 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:06.767 Message: lib/fib: Defining dependency "fib" 00:01:06.767 Message: lib/port: Defining dependency "port" 00:01:06.767 Message: lib/pdump: Defining dependency "pdump" 00:01:06.767 Message: lib/table: Defining dependency "table" 00:01:06.767 Message: lib/pipeline: Defining dependency "pipeline" 00:01:06.767 Message: lib/graph: Defining dependency "graph" 00:01:06.767 Message: lib/node: Defining dependency "node" 00:01:06.767 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:06.767 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:06.767 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:06.767 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:06.767 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:06.767 Compiler for C supports arguments -Wno-unused-value: YES 00:01:07.706 Compiler for C supports arguments -Wno-format: YES 00:01:07.706 Compiler for C supports arguments -Wno-format-security: YES 00:01:07.706 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:07.706 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:07.706 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:07.706 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:07.706 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:07.706 Compiler for C supports arguments -mavx2: YES (cached) 00:01:07.706 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:07.706 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:07.706 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:07.706 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:07.706 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:07.706 Program doxygen found: YES (/usr/bin/doxygen) 00:01:07.706 Configuring doxy-api.conf using configuration 00:01:07.706 Program sphinx-build found: NO 00:01:07.706 Configuring rte_build_config.h using configuration 00:01:07.706 Message: 00:01:07.706 ================= 00:01:07.706 Applications Enabled 00:01:07.706 ================= 00:01:07.706 00:01:07.706 apps: 00:01:07.706 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:07.706 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:07.706 test-security-perf, 00:01:07.706 00:01:07.706 Message: 00:01:07.706 ================= 00:01:07.706 Libraries Enabled 00:01:07.706 ================= 00:01:07.706 00:01:07.706 libs: 00:01:07.706 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:07.706 meter, ethdev, pci, 
cmdline, metrics, hash, timer, acl, 00:01:07.706 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:07.706 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:07.706 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:07.706 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:07.706 table, pipeline, graph, node, 00:01:07.706 00:01:07.706 Message: 00:01:07.706 =============== 00:01:07.706 Drivers Enabled 00:01:07.706 =============== 00:01:07.706 00:01:07.706 common: 00:01:07.706 00:01:07.706 bus: 00:01:07.706 pci, vdev, 00:01:07.706 mempool: 00:01:07.706 ring, 00:01:07.706 dma: 00:01:07.706 00:01:07.706 net: 00:01:07.706 i40e, 00:01:07.706 raw: 00:01:07.706 00:01:07.706 crypto: 00:01:07.706 00:01:07.706 compress: 00:01:07.706 00:01:07.706 regex: 00:01:07.706 00:01:07.706 vdpa: 00:01:07.706 00:01:07.706 event: 00:01:07.706 00:01:07.706 baseband: 00:01:07.706 00:01:07.706 gpu: 00:01:07.706 00:01:07.706 00:01:07.706 Message: 00:01:07.706 ================= 00:01:07.706 Content Skipped 00:01:07.706 ================= 00:01:07.706 00:01:07.706 apps: 00:01:07.706 00:01:07.706 libs: 00:01:07.706 kni: explicitly disabled via build config (deprecated lib) 00:01:07.706 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:07.706 00:01:07.706 drivers: 00:01:07.706 common/cpt: not in enabled drivers build config 00:01:07.706 common/dpaax: not in enabled drivers build config 00:01:07.706 common/iavf: not in enabled drivers build config 00:01:07.706 common/idpf: not in enabled drivers build config 00:01:07.706 common/mvep: not in enabled drivers build config 00:01:07.706 common/octeontx: not in enabled drivers build config 00:01:07.706 bus/auxiliary: not in enabled drivers build config 00:01:07.706 bus/dpaa: not in enabled drivers build config 00:01:07.706 bus/fslmc: not in enabled drivers build config 00:01:07.706 bus/ifpga: not in enabled drivers build config 00:01:07.706 bus/vmbus: not in enabled drivers build config 00:01:07.706 common/cnxk: not in enabled drivers build config 00:01:07.706 common/mlx5: not in enabled drivers build config 00:01:07.706 common/qat: not in enabled drivers build config 00:01:07.706 common/sfc_efx: not in enabled drivers build config 00:01:07.706 mempool/bucket: not in enabled drivers build config 00:01:07.706 mempool/cnxk: not in enabled drivers build config 00:01:07.706 mempool/dpaa: not in enabled drivers build config 00:01:07.706 mempool/dpaa2: not in enabled drivers build config 00:01:07.706 mempool/octeontx: not in enabled drivers build config 00:01:07.706 mempool/stack: not in enabled drivers build config 00:01:07.706 dma/cnxk: not in enabled drivers build config 00:01:07.706 dma/dpaa: not in enabled drivers build config 00:01:07.706 dma/dpaa2: not in enabled drivers build config 00:01:07.706 dma/hisilicon: not in enabled drivers build config 00:01:07.706 dma/idxd: not in enabled drivers build config 00:01:07.706 dma/ioat: not in enabled drivers build config 00:01:07.707 dma/skeleton: not in enabled drivers build config 00:01:07.707 net/af_packet: not in enabled drivers build config 00:01:07.707 net/af_xdp: not in enabled drivers build config 00:01:07.707 net/ark: not in enabled drivers build config 00:01:07.707 net/atlantic: not in enabled drivers build config 00:01:07.707 net/avp: not in enabled drivers build config 00:01:07.707 net/axgbe: not in enabled drivers build config 00:01:07.707 net/bnx2x: not in enabled drivers build config 00:01:07.707 net/bnxt: not in 
enabled drivers build config 00:01:07.707 net/bonding: not in enabled drivers build config 00:01:07.707 net/cnxk: not in enabled drivers build config 00:01:07.707 net/cxgbe: not in enabled drivers build config 00:01:07.707 net/dpaa: not in enabled drivers build config 00:01:07.707 net/dpaa2: not in enabled drivers build config 00:01:07.707 net/e1000: not in enabled drivers build config 00:01:07.707 net/ena: not in enabled drivers build config 00:01:07.707 net/enetc: not in enabled drivers build config 00:01:07.707 net/enetfec: not in enabled drivers build config 00:01:07.707 net/enic: not in enabled drivers build config 00:01:07.707 net/failsafe: not in enabled drivers build config 00:01:07.707 net/fm10k: not in enabled drivers build config 00:01:07.707 net/gve: not in enabled drivers build config 00:01:07.707 net/hinic: not in enabled drivers build config 00:01:07.707 net/hns3: not in enabled drivers build config 00:01:07.707 net/iavf: not in enabled drivers build config 00:01:07.707 net/ice: not in enabled drivers build config 00:01:07.707 net/idpf: not in enabled drivers build config 00:01:07.707 net/igc: not in enabled drivers build config 00:01:07.707 net/ionic: not in enabled drivers build config 00:01:07.707 net/ipn3ke: not in enabled drivers build config 00:01:07.707 net/ixgbe: not in enabled drivers build config 00:01:07.707 net/kni: not in enabled drivers build config 00:01:07.707 net/liquidio: not in enabled drivers build config 00:01:07.707 net/mana: not in enabled drivers build config 00:01:07.707 net/memif: not in enabled drivers build config 00:01:07.707 net/mlx4: not in enabled drivers build config 00:01:07.707 net/mlx5: not in enabled drivers build config 00:01:07.707 net/mvneta: not in enabled drivers build config 00:01:07.707 net/mvpp2: not in enabled drivers build config 00:01:07.707 net/netvsc: not in enabled drivers build config 00:01:07.707 net/nfb: not in enabled drivers build config 00:01:07.707 net/nfp: not in enabled drivers build config 00:01:07.707 net/ngbe: not in enabled drivers build config 00:01:07.707 net/null: not in enabled drivers build config 00:01:07.707 net/octeontx: not in enabled drivers build config 00:01:07.707 net/octeon_ep: not in enabled drivers build config 00:01:07.707 net/pcap: not in enabled drivers build config 00:01:07.707 net/pfe: not in enabled drivers build config 00:01:07.707 net/qede: not in enabled drivers build config 00:01:07.707 net/ring: not in enabled drivers build config 00:01:07.707 net/sfc: not in enabled drivers build config 00:01:07.707 net/softnic: not in enabled drivers build config 00:01:07.707 net/tap: not in enabled drivers build config 00:01:07.707 net/thunderx: not in enabled drivers build config 00:01:07.707 net/txgbe: not in enabled drivers build config 00:01:07.707 net/vdev_netvsc: not in enabled drivers build config 00:01:07.707 net/vhost: not in enabled drivers build config 00:01:07.707 net/virtio: not in enabled drivers build config 00:01:07.707 net/vmxnet3: not in enabled drivers build config 00:01:07.707 raw/cnxk_bphy: not in enabled drivers build config 00:01:07.707 raw/cnxk_gpio: not in enabled drivers build config 00:01:07.707 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:07.707 raw/ifpga: not in enabled drivers build config 00:01:07.707 raw/ntb: not in enabled drivers build config 00:01:07.707 raw/skeleton: not in enabled drivers build config 00:01:07.707 crypto/armv8: not in enabled drivers build config 00:01:07.707 crypto/bcmfs: not in enabled drivers build config 00:01:07.707 
crypto/caam_jr: not in enabled drivers build config 00:01:07.707 crypto/ccp: not in enabled drivers build config 00:01:07.707 crypto/cnxk: not in enabled drivers build config 00:01:07.707 crypto/dpaa_sec: not in enabled drivers build config 00:01:07.707 crypto/dpaa2_sec: not in enabled drivers build config 00:01:07.707 crypto/ipsec_mb: not in enabled drivers build config 00:01:07.707 crypto/mlx5: not in enabled drivers build config 00:01:07.707 crypto/mvsam: not in enabled drivers build config 00:01:07.707 crypto/nitrox: not in enabled drivers build config 00:01:07.707 crypto/null: not in enabled drivers build config 00:01:07.707 crypto/octeontx: not in enabled drivers build config 00:01:07.707 crypto/openssl: not in enabled drivers build config 00:01:07.707 crypto/scheduler: not in enabled drivers build config 00:01:07.707 crypto/uadk: not in enabled drivers build config 00:01:07.707 crypto/virtio: not in enabled drivers build config 00:01:07.707 compress/isal: not in enabled drivers build config 00:01:07.707 compress/mlx5: not in enabled drivers build config 00:01:07.707 compress/octeontx: not in enabled drivers build config 00:01:07.707 compress/zlib: not in enabled drivers build config 00:01:07.707 regex/mlx5: not in enabled drivers build config 00:01:07.707 regex/cn9k: not in enabled drivers build config 00:01:07.707 vdpa/ifc: not in enabled drivers build config 00:01:07.707 vdpa/mlx5: not in enabled drivers build config 00:01:07.707 vdpa/sfc: not in enabled drivers build config 00:01:07.707 event/cnxk: not in enabled drivers build config 00:01:07.707 event/dlb2: not in enabled drivers build config 00:01:07.707 event/dpaa: not in enabled drivers build config 00:01:07.707 event/dpaa2: not in enabled drivers build config 00:01:07.707 event/dsw: not in enabled drivers build config 00:01:07.707 event/opdl: not in enabled drivers build config 00:01:07.707 event/skeleton: not in enabled drivers build config 00:01:07.707 event/sw: not in enabled drivers build config 00:01:07.707 event/octeontx: not in enabled drivers build config 00:01:07.707 baseband/acc: not in enabled drivers build config 00:01:07.707 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:07.707 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:07.707 baseband/la12xx: not in enabled drivers build config 00:01:07.707 baseband/null: not in enabled drivers build config 00:01:07.707 baseband/turbo_sw: not in enabled drivers build config 00:01:07.707 gpu/cuda: not in enabled drivers build config 00:01:07.707 00:01:07.707 00:01:07.707 Build targets in project: 316 00:01:07.707 00:01:07.707 DPDK 22.11.4 00:01:07.707 00:01:07.707 User defined options 00:01:07.707 libdir : lib 00:01:07.707 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:07.707 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:07.707 c_link_args : 00:01:07.707 enable_docs : false 00:01:07.707 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:07.707 enable_kmods : false 00:01:07.707 machine : native 00:01:07.707 tests : false 00:01:07.707 00:01:07.707 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:07.707 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
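The meson configure step above finishes here (note its own warning that the bare `meson [options]` form is deprecated); the ninja build follows on the next lines. As a rough local reproduction of this configure-and-build step only (a sketch, not part of the pipeline: DPDK_DIR is a placeholder for a local DPDK v22.11.4 checkout, meson and ninja are assumed to be installed, and `meson setup` is used instead of the bare form to avoid the warning shown above; the options themselves are the ones echoed in this log):

    #!/usr/bin/env bash
    # Sketch of the DPDK configure/build step traced in this log.
    # DPDK_DIR is a hypothetical placeholder for a local DPDK checkout.
    set -euo pipefail
    DPDK_DIR=${DPDK_DIR:-$PWD/dpdk}
    cd "$DPDK_DIR"
    # Same options as the pipeline's meson invocation, via 'meson setup'.
    meson setup build-tmp \
        --prefix="$DPDK_DIR/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Build with all available cores (the pipeline uses -j48 on its node).
    ninja -C build-tmp -j"$(nproc)"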
00:01:07.707 01:39:22 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:07.975 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:07.975 [1/745] Generating lib/rte_telemetry_def with a custom command 00:01:07.975 [2/745] Generating lib/rte_kvargs_def with a custom command 00:01:07.975 [3/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:07.975 [4/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:07.975 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:07.975 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:07.975 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:07.975 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:07.975 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:07.975 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:07.975 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:07.975 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:07.975 [13/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:07.975 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:07.975 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:07.975 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:08.233 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:08.233 [18/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:08.233 [19/745] Linking static target lib/librte_kvargs.a 00:01:08.233 [20/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:08.233 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:08.233 [22/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:08.233 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:08.234 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:08.234 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:08.234 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:08.234 [27/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:08.234 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:08.234 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:08.234 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:08.234 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:08.234 [32/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:08.234 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:08.234 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:08.234 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:08.234 [36/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:08.234 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:08.234 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:08.234 [39/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:08.234 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:08.234 [41/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:08.234 [42/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:08.234 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:08.234 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:08.234 [45/745] Generating lib/rte_eal_mingw with a custom command 00:01:08.234 [46/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:08.234 [47/745] Generating lib/rte_ring_def with a custom command 00:01:08.234 [48/745] Generating lib/rte_eal_def with a custom command 00:01:08.234 [49/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:08.234 [50/745] Generating lib/rte_ring_mingw with a custom command 00:01:08.234 [51/745] Generating lib/rte_rcu_def with a custom command 00:01:08.234 [52/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:08.234 [53/745] Generating lib/rte_rcu_mingw with a custom command 00:01:08.234 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:08.234 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:08.234 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:08.234 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:08.234 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:08.234 [59/745] Generating lib/rte_mempool_mingw with a custom command 00:01:08.234 [60/745] Generating lib/rte_mempool_def with a custom command 00:01:08.234 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:08.234 [62/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:08.234 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:08.234 [64/745] Generating lib/rte_mbuf_def with a custom command 00:01:08.234 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:08.234 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:08.496 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:08.496 [68/745] Generating lib/rte_net_def with a custom command 00:01:08.496 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:08.496 [70/745] Generating lib/rte_net_mingw with a custom command 00:01:08.496 [71/745] Generating lib/rte_meter_def with a custom command 00:01:08.496 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:08.496 [73/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:08.496 [74/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:08.496 [75/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:08.496 [76/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:08.496 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:08.496 [78/745] Generating lib/rte_ethdev_def with a custom command 00:01:08.496 [79/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.496 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:08.496 [81/745] Linking static target lib/librte_ring.a 00:01:08.496 [82/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:08.496 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:08.496 [84/745] Linking target lib/librte_kvargs.so.23.0 00:01:08.496 [85/745] Generating lib/rte_pci_def with a custom command 00:01:08.496 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:08.762 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:08.762 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:08.762 [89/745] Linking static target lib/librte_meter.a 00:01:08.762 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:08.762 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:08.762 [92/745] Linking static target lib/librte_pci.a 00:01:08.762 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:08.762 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:08.762 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:08.762 [96/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:08.762 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:08.762 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:09.023 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.023 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:09.023 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:09.023 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.023 [103/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.023 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:09.023 [105/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:09.023 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:09.023 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:09.023 [108/745] Generating lib/rte_cmdline_def with a custom command 00:01:09.023 [109/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:09.023 [110/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:09.023 [111/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:09.023 [112/745] Linking static target lib/librte_telemetry.a 00:01:09.023 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:09.023 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:09.023 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:09.023 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:01:09.023 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:09.023 [118/745] Generating lib/rte_hash_def with a custom command 00:01:09.023 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:09.286 [120/745] Generating lib/rte_timer_def with a custom command 00:01:09.286 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:09.286 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:09.286 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:09.286 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:09.286 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:09.549 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:09.549 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:09.549 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:09.549 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:09.549 [130/745] Generating lib/rte_acl_def with a custom command 00:01:09.549 [131/745] Generating lib/rte_acl_mingw with a custom command 00:01:09.549 [132/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:09.549 [133/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:09.549 [134/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:09.549 [135/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:09.549 [136/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.549 [137/745] Generating lib/rte_bbdev_def with a custom command 00:01:09.549 [138/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:09.549 [139/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:09.549 [140/745] Generating lib/rte_bitratestats_def with a custom command 00:01:09.549 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:09.549 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:09.549 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:09.549 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:09.549 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:09.814 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:09.814 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:09.814 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:09.814 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:09.814 [150/745] Generating lib/rte_bpf_mingw with a custom command 00:01:09.814 [151/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:09.814 [152/745] Generating lib/rte_bpf_def with a custom command 00:01:09.814 [153/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:09.814 [154/745] Generating lib/rte_cfgfile_def with a custom command 00:01:09.814 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:09.814 [156/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:09.814 [157/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:09.814 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:09.814 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:09.814 [160/745] Generating lib/rte_compressdev_def with a custom command 00:01:10.076 [161/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:10.076 [162/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:10.076 [163/745] Generating lib/rte_cryptodev_def with a custom command 00:01:10.076 
[164/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:10.076 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:10.076 [166/745] Linking static target lib/librte_rcu.a 00:01:10.076 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:10.076 [168/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:10.076 [169/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:10.076 [170/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:10.076 [171/745] Linking static target lib/librte_timer.a 00:01:10.076 [172/745] Generating lib/rte_distributor_def with a custom command 00:01:10.076 [173/745] Generating lib/rte_distributor_mingw with a custom command 00:01:10.076 [174/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:10.076 [175/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:10.076 [176/745] Linking static target lib/librte_cmdline.a 00:01:10.076 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:10.076 [178/745] Linking static target lib/librte_net.a 00:01:10.076 [179/745] Generating lib/rte_efd_def with a custom command 00:01:10.076 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:10.076 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:10.337 [182/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:10.337 [183/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:10.337 [184/745] Linking static target lib/librte_mempool.a 00:01:10.337 [185/745] Linking static target lib/librte_metrics.a 00:01:10.337 [186/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:10.337 [187/745] Linking static target lib/librte_cfgfile.a 00:01:10.607 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.607 [189/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:10.607 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.607 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.607 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:10.607 [193/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:10.607 [194/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:10.607 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:10.607 [196/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:10.607 [197/745] Linking static target lib/librte_eal.a 00:01:10.867 [198/745] Generating lib/rte_gpudev_def with a custom command 00:01:10.867 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:10.867 [200/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.867 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:10.867 [202/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:10.867 [203/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:10.867 [204/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:10.867 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:10.867 [206/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:10.867 [207/745] Linking static target 
lib/librte_bitratestats.a 00:01:10.867 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.867 [209/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:10.867 [210/745] Generating lib/rte_gro_def with a custom command 00:01:10.867 [211/745] Generating lib/rte_gro_mingw with a custom command 00:01:11.127 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:11.127 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:11.127 [214/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:11.127 [215/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:11.127 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.127 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:11.127 [218/745] Generating lib/rte_gso_def with a custom command 00:01:11.389 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:11.389 [220/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:11.389 [221/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:11.389 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:11.389 [223/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:11.389 [224/745] Generating lib/rte_ip_frag_def with a custom command 00:01:11.389 [225/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:11.389 [226/745] Linking static target lib/librte_bbdev.a 00:01:11.389 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:11.389 [228/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.389 [229/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.650 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:11.650 [231/745] Generating lib/rte_jobstats_def with a custom command 00:01:11.650 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:11.650 [233/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:11.650 [234/745] Generating lib/rte_latencystats_def with a custom command 00:01:11.650 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:11.650 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:11.650 [237/745] Generating lib/rte_lpm_mingw with a custom command 00:01:11.650 [238/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:11.650 [239/745] Linking static target lib/librte_compressdev.a 00:01:11.650 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:11.650 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:11.650 [242/745] Linking static target lib/librte_jobstats.a 00:01:11.917 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:11.917 [244/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:11.917 [245/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:11.917 [246/745] Linking static target lib/librte_distributor.a 00:01:12.180 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:12.180 [248/745] Compiling C 
object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:12.180 [249/745] Generating lib/rte_member_def with a custom command 00:01:12.180 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:12.180 [251/745] Generating lib/rte_member_mingw with a custom command 00:01:12.180 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.180 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:12.180 [254/745] Generating lib/rte_pcapng_def with a custom command 00:01:12.180 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:12.180 [256/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:12.444 [257/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:12.444 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:12.444 [259/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:12.444 [260/745] Linking static target lib/librte_bpf.a 00:01:12.444 [261/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:12.444 [262/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.444 [263/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:12.444 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:12.444 [265/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:12.444 [266/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:12.444 [267/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.444 [268/745] Generating lib/rte_power_def with a custom command 00:01:12.444 [269/745] Generating lib/rte_power_mingw with a custom command 00:01:12.444 [270/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:12.444 [271/745] Generating lib/rte_rawdev_def with a custom command 00:01:12.444 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:12.444 [273/745] Linking static target lib/librte_gpudev.a 00:01:12.444 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:12.708 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:12.708 [276/745] Generating lib/rte_regexdev_def with a custom command 00:01:12.708 [277/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:12.708 [278/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:12.708 [279/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:12.708 [280/745] Linking static target lib/librte_gro.a 00:01:12.708 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:12.708 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:12.708 [283/745] Generating lib/rte_rib_def with a custom command 00:01:12.708 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:12.708 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:12.708 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:12.708 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:12.708 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:12.972 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.972 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:12.972 [291/745] Generating lib/gro.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:12.972 [292/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:12.972 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:12.972 [294/745] Generating lib/rte_sched_def with a custom command 00:01:12.972 [295/745] Generating lib/rte_sched_mingw with a custom command 00:01:12.972 [296/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:12.972 [297/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:12.972 [298/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:12.972 [299/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:12.972 [300/745] Linking static target lib/librte_latencystats.a 00:01:12.972 [301/745] Generating lib/rte_security_def with a custom command 00:01:12.972 [302/745] Generating lib/rte_security_mingw with a custom command 00:01:12.972 [303/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:12.972 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:12.972 [305/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.235 [306/745] Generating lib/rte_stack_mingw with a custom command 00:01:13.235 [307/745] Generating lib/rte_stack_def with a custom command 00:01:13.235 [308/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:13.235 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:13.235 [310/745] Linking static target lib/librte_rawdev.a 00:01:13.235 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:13.235 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:13.235 [313/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:13.235 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:13.235 [315/745] Linking static target lib/librte_stack.a 00:01:13.235 [316/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:13.235 [317/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:13.235 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:13.235 [319/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:13.235 [320/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:13.235 [321/745] Generating lib/rte_vhost_mingw with a custom command 00:01:13.235 [322/745] Linking static target lib/librte_dmadev.a 00:01:13.235 [323/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:13.496 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:13.496 [325/745] Linking static target lib/librte_ip_frag.a 00:01:13.496 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:13.496 [327/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.496 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:13.496 [329/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:13.496 [330/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.761 [331/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:13.761 [332/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 
00:01:13.761 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:13.761 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.761 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.024 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.024 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:14.024 [338/745] Generating lib/rte_fib_def with a custom command 00:01:14.024 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:14.024 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:14.024 [341/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:14.024 [342/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:14.024 [343/745] Linking static target lib/librte_regexdev.a 00:01:14.024 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:14.024 [345/745] Linking static target lib/librte_gso.a 00:01:14.286 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.286 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:14.286 [348/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:14.286 [349/745] Linking static target lib/librte_efd.a 00:01:14.286 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.549 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:14.549 [352/745] Linking static target lib/librte_pcapng.a 00:01:14.549 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:14.549 [354/745] Linking static target lib/librte_lpm.a 00:01:14.549 [355/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:14.549 [356/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:14.549 [357/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:14.549 [358/745] Linking static target lib/librte_reorder.a 00:01:14.549 [359/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:14.549 [360/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:14.813 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:14.813 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:14.813 [363/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:14.813 [364/745] Generating lib/rte_port_def with a custom command 00:01:14.813 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:14.813 [366/745] Generating lib/rte_port_mingw with a custom command 00:01:14.813 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:14.813 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:14.813 [369/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:14.813 [370/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:14.813 [371/745] Generating lib/rte_pdump_def with a custom command 00:01:14.813 [372/745] Generating lib/rte_pdump_mingw with a custom command 00:01:15.077 [373/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:15.077 [374/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:15.077 [375/745] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:15.077 [376/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:15.077 [377/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.077 [378/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.077 [379/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:15.077 [380/745] Linking static target lib/librte_security.a 00:01:15.077 [381/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:15.077 [382/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:15.077 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:15.077 [384/745] Linking static target lib/librte_power.a 00:01:15.077 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.077 [386/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.077 [387/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:15.338 [388/745] Linking static target lib/librte_hash.a 00:01:15.338 [389/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:15.338 [390/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:15.338 [391/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:15.338 [392/745] Linking static target lib/acl/libavx512_tmp.a 00:01:15.338 [393/745] Linking static target lib/librte_rib.a 00:01:15.338 [394/745] Linking static target lib/librte_acl.a 00:01:15.338 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:15.601 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:15.601 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:15.601 [398/745] Generating lib/rte_table_def with a custom command 00:01:15.601 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:15.601 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.859 [401/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.859 [402/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:15.859 [403/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:15.859 [404/745] Linking static target lib/librte_ethdev.a 00:01:15.859 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.123 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:16.124 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:16.124 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:16.124 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:16.124 [410/745] Generating lib/rte_pipeline_def with a custom command 00:01:16.124 [411/745] Linking static target lib/librte_mbuf.a 00:01:16.124 [412/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:16.124 [413/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:16.124 [414/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:16.124 [415/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:16.382 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:16.382 
[417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:16.383 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:16.383 [419/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.383 [420/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:16.383 [421/745] Generating lib/rte_graph_def with a custom command 00:01:16.383 [422/745] Linking static target lib/librte_fib.a 00:01:16.383 [423/745] Generating lib/rte_graph_mingw with a custom command 00:01:16.383 [424/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:16.645 [425/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:16.645 [426/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.645 [427/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:16.645 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:16.645 [429/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:16.645 [430/745] Linking static target lib/librte_member.a 00:01:16.645 [431/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:16.645 [432/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:16.645 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:16.645 [434/745] Generating lib/rte_node_def with a custom command 00:01:16.645 [435/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:16.904 [436/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:16.904 [437/745] Generating lib/rte_node_mingw with a custom command 00:01:16.904 [438/745] Linking static target lib/librte_eventdev.a 00:01:16.904 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:16.904 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.904 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:16.904 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:16.904 [443/745] Linking static target lib/librte_sched.a 00:01:16.904 [444/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:16.904 [445/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:17.197 [446/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.197 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:17.197 [448/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:17.197 [449/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:17.197 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:17.197 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:17.197 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:17.197 [453/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:17.197 [454/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.197 [455/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:17.197 [456/745] Linking static target lib/librte_cryptodev.a 00:01:17.197 [457/745] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:17.197 [458/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:17.198 [459/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:17.198 [460/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:17.198 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:17.488 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:17.488 [463/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:17.488 [464/745] Linking static target lib/librte_pdump.a 00:01:17.488 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:17.488 [466/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:17.488 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:17.488 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:17.488 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:17.488 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:17.488 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:17.488 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:17.488 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:17.760 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:17.760 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:17.760 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.760 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:17.760 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:17.760 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:17.760 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:17.760 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:18.021 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:18.021 [483/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.021 [484/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:18.021 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.021 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:18.021 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:18.021 [488/745] Linking static target lib/librte_table.a 00:01:18.021 [489/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:18.021 [490/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:18.021 [491/745] Linking static target lib/librte_ipsec.a 00:01:18.284 [492/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:18.284 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:18.284 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:18.284 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.547 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:18.547 [497/745] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:18.547 [498/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:18.547 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:18.547 [500/745] Linking static target lib/librte_graph.a 00:01:18.547 [501/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:18.547 [502/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:18.547 [503/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:18.808 [504/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.808 [505/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:18.808 [506/745] Linking static target drivers/librte_bus_pci.a 00:01:18.808 [507/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:18.808 [508/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:18.808 [509/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:18.808 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:18.808 [511/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:19.067 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:19.067 [513/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.067 [514/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:19.331 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:19.331 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.594 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:19.594 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:19.594 [519/745] Linking static target lib/librte_port.a 00:01:19.594 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:19.594 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:19.857 [522/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:19.857 [523/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:19.857 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:19.857 [525/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:19.857 [526/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.119 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.119 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:20.119 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:20.119 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.119 [531/745] Linking static target drivers/librte_mempool_ring.a 00:01:20.119 [532/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:20.119 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:20.119 [534/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:20.384 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:20.384 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:20.384 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:20.384 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:20.384 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.645 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:20.645 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.908 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:20.908 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:20.908 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:21.173 [545/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:21.173 [546/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:21.173 [547/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:21.173 [548/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:21.173 [549/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:21.435 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:21.435 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:21.435 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:21.435 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:21.695 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:21.695 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:21.961 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:21.961 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:21.961 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:22.223 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:22.223 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:22.482 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:22.482 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:22.482 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:22.482 [564/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:22.745 [565/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:22.746 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:22.746 [567/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:22.746 [568/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:22.746 [569/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:22.746 [570/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:22.746 
[571/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:22.746 [572/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:22.746 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:23.009 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:23.009 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:23.272 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:23.272 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:23.272 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:23.272 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:23.272 [580/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.272 [581/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:23.272 [582/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:23.272 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:23.272 [584/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:23.536 [585/745] Linking target lib/librte_eal.so.23.0 00:01:23.536 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:23.795 [587/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:23.795 [588/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.795 [589/745] Linking target lib/librte_ring.so.23.0 00:01:23.795 [590/745] Linking target lib/librte_meter.so.23.0 00:01:23.795 [591/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:23.795 [592/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:24.058 [593/745] Linking target lib/librte_pci.so.23.0 00:01:24.058 [594/745] Linking target lib/librte_timer.so.23.0 00:01:24.058 [595/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:24.058 [596/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:24.058 [597/745] Linking target lib/librte_rcu.so.23.0 00:01:24.058 [598/745] Linking target lib/librte_mempool.so.23.0 00:01:24.058 [599/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:24.058 [600/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:24.321 [601/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:24.321 [602/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:24.321 [603/745] Linking target lib/librte_cfgfile.so.23.0 00:01:24.321 [604/745] Linking target lib/librte_acl.so.23.0 00:01:24.321 [605/745] Linking target lib/librte_jobstats.so.23.0 00:01:24.321 [606/745] Linking target lib/librte_rawdev.so.23.0 00:01:24.321 [607/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:24.321 [608/745] Linking target lib/librte_dmadev.so.23.0 00:01:24.321 [609/745] Linking target lib/librte_stack.so.23.0 00:01:24.321 [610/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:24.321 [611/745] Linking target 
lib/librte_graph.so.23.0 00:01:24.321 [612/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:24.321 [613/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:24.321 [614/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:24.321 [615/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:24.321 [616/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:24.321 [617/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:24.321 [618/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:24.580 [619/745] Linking target lib/librte_rib.so.23.0 00:01:24.580 [620/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:24.580 [621/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:24.580 [622/745] Linking target lib/librte_mbuf.so.23.0 00:01:24.580 [623/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:24.580 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:24.580 [625/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:24.580 [626/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:24.580 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:24.580 [628/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:24.580 [629/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:24.580 [630/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:24.580 [631/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:24.580 [632/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:24.839 [633/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:24.839 [634/745] Linking target lib/librte_net.so.23.0 00:01:24.839 [635/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:24.839 [636/745] Linking target lib/librte_bbdev.so.23.0 00:01:24.839 [637/745] Linking target lib/librte_distributor.so.23.0 00:01:24.839 [638/745] Linking target lib/librte_reorder.so.23.0 00:01:24.839 [639/745] Linking target lib/librte_gpudev.so.23.0 00:01:24.839 [640/745] Linking target lib/librte_regexdev.so.23.0 00:01:24.839 [641/745] Linking target lib/librte_sched.so.23.0 00:01:24.839 [642/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:24.839 [643/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:24.839 [644/745] Linking target lib/librte_cryptodev.so.23.0 00:01:24.839 [645/745] Linking target lib/librte_fib.so.23.0 00:01:24.839 [646/745] Linking target lib/librte_compressdev.so.23.0 00:01:24.839 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:24.839 [648/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:24.839 [649/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:24.839 [650/745] Linking target lib/librte_ethdev.so.23.0 00:01:24.839 [651/745] Linking target lib/librte_hash.so.23.0 00:01:24.839 [652/745] Linking target lib/librte_cmdline.so.23.0 00:01:24.839 [653/745] Linking target lib/librte_security.so.23.0 00:01:25.098 [654/745] Generating symbol file 
lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:25.098 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:25.098 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:25.098 [657/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:25.098 [658/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:25.098 [659/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:25.098 [660/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:25.098 [661/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:25.098 [662/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:25.098 [663/745] Linking target lib/librte_metrics.so.23.0 00:01:25.098 [664/745] Linking target lib/librte_pcapng.so.23.0 00:01:25.098 [665/745] Linking target lib/librte_gso.so.23.0 00:01:25.098 [666/745] Linking target lib/librte_gro.so.23.0 00:01:25.098 [667/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:25.098 [668/745] Linking target lib/librte_power.so.23.0 00:01:25.098 [669/745] Linking target lib/librte_bpf.so.23.0 00:01:25.098 [670/745] Linking target lib/librte_efd.so.23.0 00:01:25.098 [671/745] Linking target lib/librte_lpm.so.23.0 00:01:25.098 [672/745] Linking target lib/librte_member.so.23.0 00:01:25.098 [673/745] Linking target lib/librte_ip_frag.so.23.0 00:01:25.356 [674/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:25.356 [675/745] Linking target lib/librte_ipsec.so.23.0 00:01:25.356 [676/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:25.356 [677/745] Linking target lib/librte_eventdev.so.23.0 00:01:25.356 [678/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:25.356 [679/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:25.356 [680/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:25.356 [681/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:25.356 [682/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:25.356 [683/745] Linking target lib/librte_latencystats.so.23.0 00:01:25.356 [684/745] Linking target lib/librte_bitratestats.so.23.0 00:01:25.356 [685/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:25.356 [686/745] Linking target lib/librte_pdump.so.23.0 00:01:25.356 [687/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:25.356 [688/745] Linking target lib/librte_port.so.23.0 00:01:25.615 [689/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:25.615 [690/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:25.615 [691/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:25.615 [692/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:25.615 [693/745] Linking target lib/librte_table.so.23.0 00:01:25.615 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:25.873 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:25.873 [696/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:26.132 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:26.390 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:26.390 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:26.390 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:26.390 [701/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:26.390 [702/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:26.390 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:26.956 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:26.956 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:26.956 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:26.956 [707/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:26.956 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:26.956 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:27.214 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:27.472 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.472 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:28.406 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:28.406 [714/745] Linking static target lib/librte_node.a 00:01:28.663 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.663 [716/745] Linking target lib/librte_node.so.23.0 00:01:28.920 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:29.852 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:30.109 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:38.238 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.297 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:10.297 [722/745] Linking static target lib/librte_vhost.a 00:02:10.297 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.297 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:25.208 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:25.208 [726/745] Linking static target lib/librte_pipeline.a 00:02:25.208 [727/745] Linking target app/dpdk-dumpcap 00:02:25.208 [728/745] Linking target app/dpdk-test-cmdline 00:02:25.208 [729/745] Linking target app/dpdk-pdump 00:02:25.208 [730/745] Linking target app/dpdk-test-fib 00:02:25.208 [731/745] Linking target app/dpdk-proc-info 00:02:25.208 [732/745] Linking target app/dpdk-test-acl 00:02:25.208 [733/745] Linking target app/dpdk-test-sad 00:02:25.208 [734/745] Linking target app/dpdk-test-pipeline 00:02:25.208 [735/745] Linking target app/dpdk-test-gpudev 00:02:25.208 [736/745] Linking target app/dpdk-test-crypto-perf 00:02:25.208 [737/745] Linking target app/dpdk-test-regex 00:02:25.208 [738/745] Linking target app/dpdk-test-flow-perf 00:02:25.208 [739/745] Linking target app/dpdk-test-security-perf 00:02:25.208 [740/745] Linking target 
app/dpdk-test-bbdev 00:02:25.208 [741/745] Linking target app/dpdk-test-compress-perf 00:02:25.208 [742/745] Linking target app/dpdk-test-eventdev 00:02:25.208 [743/745] Linking target app/dpdk-testpmd 00:02:25.208 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.466 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:25.466 01:40:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:25.466 01:40:40 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:25.466 01:40:40 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:25.466 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:25.466 [0/1] Installing files. 00:02:25.728 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.728 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.729 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.729 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.730 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:25.731 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.731 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.732 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 
00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.733 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.734 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.734 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.734 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.734 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 
Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 
Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.734 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.735 Installing lib/librte_pdump.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.305 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.305 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.305 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.305 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:26.305 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
00:02:26.305 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.305 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.306 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.307 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.308 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:26.309 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:26.309 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:26.309 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:26.309 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:26.309 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:26.309 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:26.309 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:26.309 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:26.309 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:26.309 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:26.309 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:26.309 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:26.309 Installing symlink pointing to librte_mempool.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:26.309 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:26.309 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:26.309 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:26.309 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:26.309 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:26.309 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:26.309 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:26.309 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:26.309 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:26.309 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:26.309 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:26.309 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:26.309 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:26.309 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:26.309 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:26.309 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:26.309 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:26.309 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:26.309 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:26.309 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:26.309 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:26.309 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:26.309 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:26.309 Installing symlink pointing to librte_bitratestats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:26.309 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:26.309 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:26.309 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:26.309 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:26.309 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:26.309 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:26.309 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:26.309 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:26.309 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:26.309 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:26.309 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:26.309 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:26.309 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:26.309 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:26.309 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:26.309 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:26.309 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:26.309 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:26.309 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:26.309 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:26.309 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:26.309 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:26.309 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:26.309 Installing symlink pointing to librte_jobstats.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:26.310 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:26.310 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:26.310 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:26.310 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:26.310 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:26.310 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:26.310 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:26.310 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:26.310 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:26.310 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:26.310 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:26.310 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:26.310 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:26.310 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:26.310 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:26.310 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:26.310 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:26.310 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:26.310 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:26.310 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:26.310 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:26.310 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:26.310 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:26.310 Installing symlink pointing to librte_security.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:26.310 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:26.310 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:26.310 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:26.310 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:26.310 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:26.310 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:26.310 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:26.310 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:26.310 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:26.310 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:26.310 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:26.310 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:26.310 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:26.310 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:26.310 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:26.310 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:26.310 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:26.310 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:26.310 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:26.310 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:26.310 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:26.310 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:26.310 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:26.310 Installing symlink pointing to librte_bus_vdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:26.310 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:26.310 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:26.310 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:26.310 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:26.310 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:26.310 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:26.310 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:26.310 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:26.310 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:26.310 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:26.310 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:26.310 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:26.310 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:26.310 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:26.310 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:26.310 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:26.310 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:26.310 01:40:41 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:26.310 01:40:41 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:26.310 00:02:26.310 real 1m24.011s 00:02:26.310 user 14m23.208s 00:02:26.310 sys 1m47.151s 00:02:26.310 01:40:41 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:26.310 01:40:41 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:26.310 ************************************ 00:02:26.310 END TEST build_native_dpdk 00:02:26.310 ************************************ 00:02:26.310 01:40:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:26.310 01:40:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:26.310 01:40:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:26.310 01:40:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:26.310 01:40:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:26.310 01:40:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:26.310 01:40:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:26.310 01:40:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:26.569 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
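The configure step above points SPDK at the freshly staged DPDK build through its pkg-config metadata (the libdpdk.pc and libdpdk-libs.pc files installed into dpdk/build/lib/pkgconfig earlier in this log). As a rough illustration of how any out-of-tree consumer could pick up that same staged, non-system install, a minimal sketch follows; the DPDK_BUILD variable is just shorthand for the workspace path used in this job, and hello_dpdk.c is a hypothetical source file, not part of this run:

  # Point pkg-config at the staged (non-system) DPDK install
  export DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  export PKG_CONFIG_PATH=$DPDK_BUILD/lib/pkgconfig:$PKG_CONFIG_PATH

  # Inspect the version and the compile/link flags DPDK exports
  pkg-config --modversion libdpdk
  pkg-config --cflags --libs libdpdk

  # Build a hypothetical consumer against it; LD_LIBRARY_PATH is needed at run
  # time because the shared libraries live outside the default linker paths
  cc hello_dpdk.c $(pkg-config --cflags --libs libdpdk) -o hello_dpdk
  LD_LIBRARY_PATH=$DPDK_BUILD/lib ./hello_dpdk

The "DPDK libraries" and "DPDK includes" lines that follow in the configure output below show the same staged paths being resolved for the SPDK build itself.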
00:02:26.569 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:26.569 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:26.569 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:26.827 Using 'verbs' RDMA provider 00:02:37.393 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:45.502 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:45.761 Creating mk/config.mk...done. 00:02:45.761 Creating mk/cc.flags.mk...done. 00:02:45.761 Type 'make' to build. 00:02:45.761 01:41:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:45.761 01:41:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:45.761 01:41:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:45.761 01:41:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.761 ************************************ 00:02:45.761 START TEST make 00:02:45.761 ************************************ 00:02:45.761 01:41:00 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:46.019 make[1]: Nothing to be done for 'all'. 00:02:47.936 The Meson build system 00:02:47.936 Version: 1.3.1 00:02:47.936 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:47.936 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:47.936 Build type: native build 00:02:47.936 Project name: libvfio-user 00:02:47.936 Project version: 0.0.1 00:02:47.936 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:47.936 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:47.936 Host machine cpu family: x86_64 00:02:47.936 Host machine cpu: x86_64 00:02:47.936 Run-time dependency threads found: YES 00:02:47.936 Library dl found: YES 00:02:47.936 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:47.936 Run-time dependency json-c found: YES 0.17 00:02:47.936 Run-time dependency cmocka found: YES 1.1.7 00:02:47.936 Program pytest-3 found: NO 00:02:47.936 Program flake8 found: NO 00:02:47.936 Program misspell-fixer found: NO 00:02:47.936 Program restructuredtext-lint found: NO 00:02:47.936 Program valgrind found: YES (/usr/bin/valgrind) 00:02:47.936 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:47.936 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:47.936 Compiler for C supports arguments -Wwrite-strings: YES 00:02:47.936 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:47.936 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:47.936 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:47.936 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:47.936 Build targets in project: 8 00:02:47.936 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:47.936 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:47.936 00:02:47.936 libvfio-user 0.0.1 00:02:47.936 00:02:47.936 User defined options 00:02:47.936 buildtype : debug 00:02:47.936 default_library: shared 00:02:47.936 libdir : /usr/local/lib 00:02:47.936 00:02:47.936 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.515 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:48.515 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:48.515 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:48.515 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:48.515 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:48.515 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:48.515 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:48.515 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:48.515 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:48.774 [9/37] Compiling C object samples/null.p/null.c.o 00:02:48.774 [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:48.774 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:48.774 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:48.774 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:48.774 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:48.774 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:48.774 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:48.774 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:48.774 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:48.774 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:48.774 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:48.774 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:48.774 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:48.774 [23/37] Compiling C object samples/server.p/server.c.o 00:02:48.774 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:48.774 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:48.774 [26/37] Compiling C object samples/client.p/client.c.o 00:02:48.774 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:48.774 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:48.774 [29/37] Linking target samples/client 00:02:49.040 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:49.040 [31/37] Linking target test/unit_tests 00:02:49.040 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:49.040 [33/37] Linking target samples/null 00:02:49.040 [34/37] Linking target samples/server 00:02:49.040 [35/37] Linking target samples/lspci 00:02:49.040 [36/37] Linking target samples/gpio-pci-idio-16 00:02:49.040 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:49.040 INFO: autodetecting backend as ninja 00:02:49.040 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
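The [37/37] link steps above complete the Ninja build of the bundled libvfio-user subproject, and the DESTDIR'd meson install on the next line stages the result under the SPDK build tree instead of installing into the real /usr/local prefix. A minimal sketch of that configure/build/stage pattern in generic Meson terms, assuming a source tree in ./libvfio-user and using ./build-debug and ./stage as placeholder directories (the options mirror the "buildtype : debug" and "default_library: shared" summary printed above):

  # Configure an out-of-tree debug build of the subproject
  meson setup build-debug libvfio-user -Dbuildtype=debug -Ddefault_library=shared

  # Compile with the autodetected Ninja backend
  ninja -C build-debug

  # Stage the install under DESTDIR rather than the configured prefix,
  # mirroring the DESTDIR=... meson install --quiet -C ... step below
  DESTDIR=$PWD/stage meson install --quiet -C build-debug

Staging with DESTDIR keeps the CI workspace self-contained, which is why the subsequent SPDK make step can link against libvfio-user without touching system library paths.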
00:02:49.300 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:49.873 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:49.873 ninja: no work to do. 00:03:02.079 CC lib/ut/ut.o 00:03:02.079 CC lib/log/log.o 00:03:02.079 CC lib/log/log_flags.o 00:03:02.079 CC lib/log/log_deprecated.o 00:03:02.079 CC lib/ut_mock/mock.o 00:03:02.079 LIB libspdk_log.a 00:03:02.079 LIB libspdk_ut.a 00:03:02.079 LIB libspdk_ut_mock.a 00:03:02.079 SO libspdk_ut.so.2.0 00:03:02.079 SO libspdk_log.so.7.0 00:03:02.079 SO libspdk_ut_mock.so.6.0 00:03:02.079 SYMLINK libspdk_ut.so 00:03:02.079 SYMLINK libspdk_log.so 00:03:02.079 SYMLINK libspdk_ut_mock.so 00:03:02.079 CC lib/ioat/ioat.o 00:03:02.079 CC lib/dma/dma.o 00:03:02.079 CXX lib/trace_parser/trace.o 00:03:02.079 CC lib/util/base64.o 00:03:02.079 CC lib/util/bit_array.o 00:03:02.079 CC lib/util/cpuset.o 00:03:02.079 CC lib/util/crc16.o 00:03:02.079 CC lib/util/crc32.o 00:03:02.079 CC lib/util/crc32c.o 00:03:02.079 CC lib/util/crc32_ieee.o 00:03:02.079 CC lib/util/crc64.o 00:03:02.079 CC lib/util/dif.o 00:03:02.079 CC lib/util/fd.o 00:03:02.079 CC lib/util/fd_group.o 00:03:02.079 CC lib/util/file.o 00:03:02.079 CC lib/util/hexlify.o 00:03:02.079 CC lib/util/iov.o 00:03:02.079 CC lib/util/math.o 00:03:02.080 CC lib/util/net.o 00:03:02.080 CC lib/util/pipe.o 00:03:02.080 CC lib/util/strerror_tls.o 00:03:02.080 CC lib/util/string.o 00:03:02.080 CC lib/util/uuid.o 00:03:02.080 CC lib/util/xor.o 00:03:02.080 CC lib/util/zipf.o 00:03:02.080 CC lib/vfio_user/host/vfio_user_pci.o 00:03:02.080 CC lib/vfio_user/host/vfio_user.o 00:03:02.080 LIB libspdk_dma.a 00:03:02.080 SO libspdk_dma.so.4.0 00:03:02.080 SYMLINK libspdk_dma.so 00:03:02.080 LIB libspdk_ioat.a 00:03:02.080 SO libspdk_ioat.so.7.0 00:03:02.080 SYMLINK libspdk_ioat.so 00:03:02.080 LIB libspdk_vfio_user.a 00:03:02.080 SO libspdk_vfio_user.so.5.0 00:03:02.080 SYMLINK libspdk_vfio_user.so 00:03:02.336 LIB libspdk_util.a 00:03:02.336 SO libspdk_util.so.10.0 00:03:02.336 SYMLINK libspdk_util.so 00:03:02.594 CC lib/conf/conf.o 00:03:02.594 CC lib/json/json_parse.o 00:03:02.594 CC lib/rdma_utils/rdma_utils.o 00:03:02.594 CC lib/vmd/vmd.o 00:03:02.594 CC lib/rdma_provider/common.o 00:03:02.594 CC lib/idxd/idxd.o 00:03:02.594 CC lib/env_dpdk/env.o 00:03:02.594 CC lib/json/json_util.o 00:03:02.594 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:02.594 CC lib/vmd/led.o 00:03:02.594 CC lib/idxd/idxd_user.o 00:03:02.594 CC lib/json/json_write.o 00:03:02.594 CC lib/env_dpdk/memory.o 00:03:02.594 CC lib/idxd/idxd_kernel.o 00:03:02.594 CC lib/env_dpdk/pci.o 00:03:02.594 CC lib/env_dpdk/init.o 00:03:02.594 CC lib/env_dpdk/threads.o 00:03:02.594 CC lib/env_dpdk/pci_ioat.o 00:03:02.594 CC lib/env_dpdk/pci_virtio.o 00:03:02.594 CC lib/env_dpdk/pci_vmd.o 00:03:02.594 CC lib/env_dpdk/pci_idxd.o 00:03:02.594 CC lib/env_dpdk/pci_event.o 00:03:02.594 CC lib/env_dpdk/sigbus_handler.o 00:03:02.594 CC lib/env_dpdk/pci_dpdk.o 00:03:02.594 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.594 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.594 LIB libspdk_trace_parser.a 00:03:02.594 SO libspdk_trace_parser.so.5.0 00:03:02.852 SYMLINK libspdk_trace_parser.so 00:03:02.852 LIB libspdk_rdma_provider.a 00:03:02.852 SO libspdk_rdma_provider.so.6.0 00:03:02.852 LIB libspdk_conf.a 00:03:02.852 SO libspdk_conf.so.6.0 00:03:02.852 LIB libspdk_rdma_utils.a 
00:03:02.852 SYMLINK libspdk_rdma_provider.so 00:03:02.852 SO libspdk_rdma_utils.so.1.0 00:03:02.852 SYMLINK libspdk_conf.so 00:03:02.852 LIB libspdk_json.a 00:03:02.852 SYMLINK libspdk_rdma_utils.so 00:03:03.110 SO libspdk_json.so.6.0 00:03:03.110 SYMLINK libspdk_json.so 00:03:03.110 LIB libspdk_idxd.a 00:03:03.110 CC lib/jsonrpc/jsonrpc_server.o 00:03:03.110 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:03.110 CC lib/jsonrpc/jsonrpc_client.o 00:03:03.110 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:03.110 SO libspdk_idxd.so.12.0 00:03:03.368 SYMLINK libspdk_idxd.so 00:03:03.368 LIB libspdk_vmd.a 00:03:03.368 SO libspdk_vmd.so.6.0 00:03:03.368 SYMLINK libspdk_vmd.so 00:03:03.368 LIB libspdk_jsonrpc.a 00:03:03.368 SO libspdk_jsonrpc.so.6.0 00:03:03.625 SYMLINK libspdk_jsonrpc.so 00:03:03.625 CC lib/rpc/rpc.o 00:03:03.883 LIB libspdk_rpc.a 00:03:03.883 SO libspdk_rpc.so.6.0 00:03:03.883 SYMLINK libspdk_rpc.so 00:03:04.140 CC lib/trace/trace.o 00:03:04.140 CC lib/trace/trace_flags.o 00:03:04.140 CC lib/trace/trace_rpc.o 00:03:04.140 CC lib/keyring/keyring.o 00:03:04.140 CC lib/keyring/keyring_rpc.o 00:03:04.140 CC lib/notify/notify.o 00:03:04.140 CC lib/notify/notify_rpc.o 00:03:04.399 LIB libspdk_notify.a 00:03:04.399 SO libspdk_notify.so.6.0 00:03:04.399 LIB libspdk_keyring.a 00:03:04.399 SYMLINK libspdk_notify.so 00:03:04.399 LIB libspdk_trace.a 00:03:04.399 SO libspdk_keyring.so.1.0 00:03:04.399 SO libspdk_trace.so.10.0 00:03:04.399 SYMLINK libspdk_keyring.so 00:03:04.399 SYMLINK libspdk_trace.so 00:03:04.665 LIB libspdk_env_dpdk.a 00:03:04.665 SO libspdk_env_dpdk.so.15.0 00:03:04.665 CC lib/thread/thread.o 00:03:04.665 CC lib/thread/iobuf.o 00:03:04.665 CC lib/sock/sock.o 00:03:04.665 CC lib/sock/sock_rpc.o 00:03:04.665 SYMLINK libspdk_env_dpdk.so 00:03:05.234 LIB libspdk_sock.a 00:03:05.234 SO libspdk_sock.so.10.0 00:03:05.234 SYMLINK libspdk_sock.so 00:03:05.234 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:05.234 CC lib/nvme/nvme_ctrlr.o 00:03:05.234 CC lib/nvme/nvme_fabric.o 00:03:05.234 CC lib/nvme/nvme_ns_cmd.o 00:03:05.234 CC lib/nvme/nvme_ns.o 00:03:05.234 CC lib/nvme/nvme_pcie_common.o 00:03:05.234 CC lib/nvme/nvme_pcie.o 00:03:05.234 CC lib/nvme/nvme_qpair.o 00:03:05.234 CC lib/nvme/nvme.o 00:03:05.234 CC lib/nvme/nvme_quirks.o 00:03:05.234 CC lib/nvme/nvme_transport.o 00:03:05.234 CC lib/nvme/nvme_discovery.o 00:03:05.234 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.234 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.234 CC lib/nvme/nvme_tcp.o 00:03:05.234 CC lib/nvme/nvme_opal.o 00:03:05.234 CC lib/nvme/nvme_io_msg.o 00:03:05.234 CC lib/nvme/nvme_poll_group.o 00:03:05.234 CC lib/nvme/nvme_zns.o 00:03:05.234 CC lib/nvme/nvme_stubs.o 00:03:05.234 CC lib/nvme/nvme_auth.o 00:03:05.234 CC lib/nvme/nvme_cuse.o 00:03:05.234 CC lib/nvme/nvme_vfio_user.o 00:03:05.234 CC lib/nvme/nvme_rdma.o 00:03:06.168 LIB libspdk_thread.a 00:03:06.168 SO libspdk_thread.so.10.1 00:03:06.168 SYMLINK libspdk_thread.so 00:03:06.426 CC lib/virtio/virtio.o 00:03:06.426 CC lib/accel/accel.o 00:03:06.426 CC lib/vfu_tgt/tgt_endpoint.o 00:03:06.426 CC lib/init/json_config.o 00:03:06.426 CC lib/blob/blobstore.o 00:03:06.426 CC lib/virtio/virtio_vhost_user.o 00:03:06.426 CC lib/accel/accel_rpc.o 00:03:06.426 CC lib/blob/request.o 00:03:06.426 CC lib/init/subsystem.o 00:03:06.426 CC lib/vfu_tgt/tgt_rpc.o 00:03:06.426 CC lib/accel/accel_sw.o 00:03:06.426 CC lib/blob/zeroes.o 00:03:06.426 CC lib/virtio/virtio_vfio_user.o 00:03:06.426 CC lib/init/subsystem_rpc.o 00:03:06.426 CC lib/virtio/virtio_pci.o 00:03:06.426 CC lib/init/rpc.o 
00:03:06.426 CC lib/blob/blob_bs_dev.o 00:03:06.684 LIB libspdk_init.a 00:03:06.684 SO libspdk_init.so.5.0 00:03:06.684 LIB libspdk_virtio.a 00:03:06.684 LIB libspdk_vfu_tgt.a 00:03:06.942 SYMLINK libspdk_init.so 00:03:06.942 SO libspdk_vfu_tgt.so.3.0 00:03:06.942 SO libspdk_virtio.so.7.0 00:03:06.942 SYMLINK libspdk_vfu_tgt.so 00:03:06.942 SYMLINK libspdk_virtio.so 00:03:06.942 CC lib/event/app.o 00:03:06.942 CC lib/event/reactor.o 00:03:06.942 CC lib/event/log_rpc.o 00:03:06.942 CC lib/event/app_rpc.o 00:03:06.942 CC lib/event/scheduler_static.o 00:03:07.508 LIB libspdk_event.a 00:03:07.508 SO libspdk_event.so.14.0 00:03:07.508 LIB libspdk_accel.a 00:03:07.508 SYMLINK libspdk_event.so 00:03:07.508 SO libspdk_accel.so.16.0 00:03:07.508 SYMLINK libspdk_accel.so 00:03:07.792 LIB libspdk_nvme.a 00:03:07.792 CC lib/bdev/bdev.o 00:03:07.792 CC lib/bdev/bdev_rpc.o 00:03:07.792 CC lib/bdev/bdev_zone.o 00:03:07.792 CC lib/bdev/part.o 00:03:07.792 CC lib/bdev/scsi_nvme.o 00:03:07.792 SO libspdk_nvme.so.13.1 00:03:08.062 SYMLINK libspdk_nvme.so 00:03:09.436 LIB libspdk_blob.a 00:03:09.436 SO libspdk_blob.so.11.0 00:03:09.694 SYMLINK libspdk_blob.so 00:03:09.694 CC lib/blobfs/blobfs.o 00:03:09.694 CC lib/blobfs/tree.o 00:03:09.694 CC lib/lvol/lvol.o 00:03:10.260 LIB libspdk_bdev.a 00:03:10.260 SO libspdk_bdev.so.16.0 00:03:10.260 SYMLINK libspdk_bdev.so 00:03:10.527 CC lib/nbd/nbd.o 00:03:10.527 CC lib/nbd/nbd_rpc.o 00:03:10.527 CC lib/ftl/ftl_core.o 00:03:10.527 CC lib/ublk/ublk.o 00:03:10.527 CC lib/ftl/ftl_init.o 00:03:10.527 CC lib/ublk/ublk_rpc.o 00:03:10.527 CC lib/ftl/ftl_layout.o 00:03:10.527 CC lib/scsi/dev.o 00:03:10.527 CC lib/ftl/ftl_debug.o 00:03:10.527 CC lib/scsi/lun.o 00:03:10.527 CC lib/ftl/ftl_io.o 00:03:10.527 CC lib/scsi/port.o 00:03:10.527 CC lib/ftl/ftl_sb.o 00:03:10.527 CC lib/nvmf/ctrlr.o 00:03:10.527 CC lib/scsi/scsi.o 00:03:10.527 CC lib/ftl/ftl_l2p.o 00:03:10.527 CC lib/nvmf/ctrlr_discovery.o 00:03:10.527 CC lib/scsi/scsi_bdev.o 00:03:10.527 CC lib/ftl/ftl_l2p_flat.o 00:03:10.527 CC lib/nvmf/ctrlr_bdev.o 00:03:10.527 CC lib/nvmf/subsystem.o 00:03:10.527 CC lib/scsi/scsi_pr.o 00:03:10.527 CC lib/scsi/scsi_rpc.o 00:03:10.527 CC lib/ftl/ftl_nv_cache.o 00:03:10.527 CC lib/scsi/task.o 00:03:10.527 CC lib/nvmf/nvmf.o 00:03:10.527 CC lib/ftl/ftl_band.o 00:03:10.527 CC lib/nvmf/nvmf_rpc.o 00:03:10.527 CC lib/nvmf/transport.o 00:03:10.527 CC lib/ftl/ftl_band_ops.o 00:03:10.527 CC lib/ftl/ftl_writer.o 00:03:10.527 CC lib/nvmf/tcp.o 00:03:10.527 CC lib/nvmf/stubs.o 00:03:10.527 CC lib/ftl/ftl_rq.o 00:03:10.527 CC lib/nvmf/mdns_server.o 00:03:10.527 CC lib/ftl/ftl_reloc.o 00:03:10.527 CC lib/nvmf/vfio_user.o 00:03:10.527 CC lib/ftl/ftl_l2p_cache.o 00:03:10.527 CC lib/nvmf/rdma.o 00:03:10.527 CC lib/ftl/ftl_p2l.o 00:03:10.527 CC lib/nvmf/auth.o 00:03:10.527 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.527 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:10.527 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:10.527 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:10.527 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.527 LIB libspdk_blobfs.a 00:03:10.527 SO libspdk_blobfs.so.10.0 00:03:10.787 SYMLINK libspdk_blobfs.so 00:03:10.787 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.787 LIB libspdk_lvol.a 00:03:10.787 SO libspdk_lvol.so.10.0 00:03:10.787 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.787 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.787 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.049 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.049 SYMLINK libspdk_lvol.so 00:03:11.049 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:11.049 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.049 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.049 CC lib/ftl/utils/ftl_conf.o 00:03:11.049 CC lib/ftl/utils/ftl_md.o 00:03:11.049 CC lib/ftl/utils/ftl_mempool.o 00:03:11.049 CC lib/ftl/utils/ftl_bitmap.o 00:03:11.049 CC lib/ftl/utils/ftl_property.o 00:03:11.049 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:11.049 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:11.049 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:11.049 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:11.049 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:11.049 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:11.049 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:11.049 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.307 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:11.307 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.307 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.307 CC lib/ftl/base/ftl_base_dev.o 00:03:11.307 CC lib/ftl/base/ftl_base_bdev.o 00:03:11.307 CC lib/ftl/ftl_trace.o 00:03:11.307 LIB libspdk_nbd.a 00:03:11.307 SO libspdk_nbd.so.7.0 00:03:11.566 LIB libspdk_scsi.a 00:03:11.566 SYMLINK libspdk_nbd.so 00:03:11.566 SO libspdk_scsi.so.9.0 00:03:11.566 SYMLINK libspdk_scsi.so 00:03:11.566 LIB libspdk_ublk.a 00:03:11.566 SO libspdk_ublk.so.3.0 00:03:11.824 SYMLINK libspdk_ublk.so 00:03:11.824 CC lib/vhost/vhost.o 00:03:11.824 CC lib/iscsi/conn.o 00:03:11.824 CC lib/iscsi/init_grp.o 00:03:11.824 CC lib/vhost/vhost_rpc.o 00:03:11.824 CC lib/iscsi/iscsi.o 00:03:11.824 CC lib/vhost/vhost_scsi.o 00:03:11.824 CC lib/vhost/vhost_blk.o 00:03:11.824 CC lib/iscsi/md5.o 00:03:11.824 CC lib/iscsi/param.o 00:03:11.824 CC lib/vhost/rte_vhost_user.o 00:03:11.824 CC lib/iscsi/portal_grp.o 00:03:11.824 CC lib/iscsi/tgt_node.o 00:03:11.824 CC lib/iscsi/iscsi_subsystem.o 00:03:11.824 CC lib/iscsi/iscsi_rpc.o 00:03:11.824 CC lib/iscsi/task.o 00:03:12.083 LIB libspdk_ftl.a 00:03:12.083 SO libspdk_ftl.so.9.0 00:03:12.649 SYMLINK libspdk_ftl.so 00:03:12.907 LIB libspdk_vhost.a 00:03:13.165 SO libspdk_vhost.so.8.0 00:03:13.165 SYMLINK libspdk_vhost.so 00:03:13.165 LIB libspdk_nvmf.a 00:03:13.165 LIB libspdk_iscsi.a 00:03:13.165 SO libspdk_nvmf.so.19.0 00:03:13.165 SO libspdk_iscsi.so.8.0 00:03:13.424 SYMLINK libspdk_iscsi.so 00:03:13.424 SYMLINK libspdk_nvmf.so 00:03:13.682 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.682 CC module/vfu_device/vfu_virtio.o 00:03:13.682 CC module/vfu_device/vfu_virtio_blk.o 00:03:13.683 CC module/vfu_device/vfu_virtio_scsi.o 00:03:13.683 CC module/vfu_device/vfu_virtio_rpc.o 00:03:13.683 CC module/accel/ioat/accel_ioat.o 00:03:13.683 CC module/keyring/linux/keyring.o 00:03:13.683 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.683 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.683 CC module/keyring/linux/keyring_rpc.o 00:03:13.683 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.683 CC module/blob/bdev/blob_bdev.o 00:03:13.941 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.941 CC module/accel/iaa/accel_iaa.o 00:03:13.941 CC module/accel/error/accel_error.o 00:03:13.941 CC module/sock/posix/posix.o 00:03:13.941 CC module/accel/dsa/accel_dsa.o 00:03:13.941 CC module/accel/error/accel_error_rpc.o 00:03:13.941 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.941 CC module/keyring/file/keyring.o 00:03:13.941 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.941 CC module/keyring/file/keyring_rpc.o 00:03:13.941 LIB libspdk_env_dpdk_rpc.a 00:03:13.941 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.941 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.941 LIB libspdk_keyring_linux.a 00:03:13.941 LIB libspdk_keyring_file.a 
00:03:13.941 LIB libspdk_scheduler_gscheduler.a 00:03:13.941 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.941 SO libspdk_keyring_linux.so.1.0 00:03:13.941 SO libspdk_keyring_file.so.1.0 00:03:13.941 SO libspdk_scheduler_gscheduler.so.4.0 00:03:13.941 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:13.942 LIB libspdk_accel_error.a 00:03:13.942 LIB libspdk_accel_ioat.a 00:03:13.942 LIB libspdk_scheduler_dynamic.a 00:03:13.942 LIB libspdk_accel_iaa.a 00:03:13.942 SO libspdk_accel_error.so.2.0 00:03:13.942 SO libspdk_accel_ioat.so.6.0 00:03:14.199 SYMLINK libspdk_keyring_linux.so 00:03:14.199 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.199 SYMLINK libspdk_keyring_file.so 00:03:14.199 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.199 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.199 SO libspdk_accel_iaa.so.3.0 00:03:14.199 LIB libspdk_accel_dsa.a 00:03:14.199 SYMLINK libspdk_accel_error.so 00:03:14.199 SYMLINK libspdk_accel_ioat.so 00:03:14.199 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.199 SYMLINK libspdk_accel_iaa.so 00:03:14.199 SO libspdk_accel_dsa.so.5.0 00:03:14.199 LIB libspdk_blob_bdev.a 00:03:14.199 SO libspdk_blob_bdev.so.11.0 00:03:14.199 SYMLINK libspdk_accel_dsa.so 00:03:14.199 SYMLINK libspdk_blob_bdev.so 00:03:14.458 LIB libspdk_vfu_device.a 00:03:14.458 SO libspdk_vfu_device.so.3.0 00:03:14.458 CC module/bdev/delay/vbdev_delay.o 00:03:14.458 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.458 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.458 CC module/bdev/malloc/bdev_malloc.o 00:03:14.458 CC module/bdev/error/vbdev_error.o 00:03:14.458 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.458 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.458 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.458 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.458 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.458 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.458 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.458 CC module/bdev/passthru/vbdev_passthru.o 00:03:14.458 CC module/bdev/nvme/bdev_nvme.o 00:03:14.458 CC module/bdev/raid/bdev_raid.o 00:03:14.458 CC module/bdev/null/bdev_null.o 00:03:14.458 CC module/bdev/aio/bdev_aio.o 00:03:14.458 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.458 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.458 CC module/bdev/nvme/nvme_rpc.o 00:03:14.458 CC module/bdev/gpt/gpt.o 00:03:14.458 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.458 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:14.458 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.458 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.458 CC module/bdev/null/bdev_null_rpc.o 00:03:14.458 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.458 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.458 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.458 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:14.458 CC module/bdev/split/vbdev_split.o 00:03:14.458 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:14.458 CC module/bdev/raid/raid0.o 00:03:14.458 CC module/bdev/ftl/bdev_ftl.o 00:03:14.458 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:14.458 CC module/bdev/nvme/vbdev_opal.o 00:03:14.458 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:14.458 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.458 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:14.458 CC module/bdev/raid/raid1.o 00:03:14.458 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:14.458 CC module/bdev/raid/concat.o 00:03:14.458 SYMLINK libspdk_vfu_device.so 00:03:14.717 LIB libspdk_sock_posix.a 00:03:14.717 SO libspdk_sock_posix.so.6.0 00:03:14.976 SYMLINK libspdk_sock_posix.so 
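The module/bdev/* objects above become the pluggable block-device backends (malloc, null, nvme, raid, gpt, and so on) that a running target exposes over JSON-RPC. An illustrative use of two of them through the bundled RPC client; the bdev name and sizes are example values and assume an spdk_tgt is already running:

    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512   # 64 MiB malloc bdev, 512-byte blocks
    scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm the bdev was created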
00:03:14.976 LIB libspdk_blobfs_bdev.a 00:03:14.976 SO libspdk_blobfs_bdev.so.6.0 00:03:14.976 LIB libspdk_bdev_malloc.a 00:03:14.976 LIB libspdk_bdev_split.a 00:03:14.976 SO libspdk_bdev_malloc.so.6.0 00:03:14.976 LIB libspdk_bdev_gpt.a 00:03:14.976 SYMLINK libspdk_blobfs_bdev.so 00:03:14.976 SO libspdk_bdev_split.so.6.0 00:03:14.976 LIB libspdk_bdev_error.a 00:03:14.976 SO libspdk_bdev_gpt.so.6.0 00:03:14.976 LIB libspdk_bdev_null.a 00:03:14.976 LIB libspdk_bdev_aio.a 00:03:14.976 SO libspdk_bdev_error.so.6.0 00:03:14.976 SYMLINK libspdk_bdev_malloc.so 00:03:14.976 SYMLINK libspdk_bdev_split.so 00:03:14.976 SO libspdk_bdev_null.so.6.0 00:03:14.976 LIB libspdk_bdev_passthru.a 00:03:14.976 SO libspdk_bdev_aio.so.6.0 00:03:14.976 LIB libspdk_bdev_ftl.a 00:03:14.976 LIB libspdk_bdev_zone_block.a 00:03:14.976 SYMLINK libspdk_bdev_gpt.so 00:03:14.976 LIB libspdk_bdev_delay.a 00:03:14.976 SO libspdk_bdev_ftl.so.6.0 00:03:14.976 SO libspdk_bdev_passthru.so.6.0 00:03:14.976 SYMLINK libspdk_bdev_error.so 00:03:14.976 SO libspdk_bdev_zone_block.so.6.0 00:03:14.976 SO libspdk_bdev_delay.so.6.0 00:03:14.976 SYMLINK libspdk_bdev_null.so 00:03:15.233 SYMLINK libspdk_bdev_aio.so 00:03:15.233 LIB libspdk_bdev_iscsi.a 00:03:15.233 SYMLINK libspdk_bdev_passthru.so 00:03:15.234 SYMLINK libspdk_bdev_ftl.so 00:03:15.234 SYMLINK libspdk_bdev_zone_block.so 00:03:15.234 SYMLINK libspdk_bdev_delay.so 00:03:15.234 SO libspdk_bdev_iscsi.so.6.0 00:03:15.234 LIB libspdk_bdev_lvol.a 00:03:15.234 SYMLINK libspdk_bdev_iscsi.so 00:03:15.234 LIB libspdk_bdev_virtio.a 00:03:15.234 SO libspdk_bdev_lvol.so.6.0 00:03:15.234 SO libspdk_bdev_virtio.so.6.0 00:03:15.234 SYMLINK libspdk_bdev_lvol.so 00:03:15.234 SYMLINK libspdk_bdev_virtio.so 00:03:15.799 LIB libspdk_bdev_raid.a 00:03:15.799 SO libspdk_bdev_raid.so.6.0 00:03:15.799 SYMLINK libspdk_bdev_raid.so 00:03:16.733 LIB libspdk_bdev_nvme.a 00:03:16.991 SO libspdk_bdev_nvme.so.7.0 00:03:16.991 SYMLINK libspdk_bdev_nvme.so 00:03:17.248 CC module/event/subsystems/vmd/vmd.o 00:03:17.248 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:17.248 CC module/event/subsystems/iobuf/iobuf.o 00:03:17.248 CC module/event/subsystems/sock/sock.o 00:03:17.248 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:17.248 CC module/event/subsystems/keyring/keyring.o 00:03:17.248 CC module/event/subsystems/scheduler/scheduler.o 00:03:17.248 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:17.248 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:17.507 LIB libspdk_event_keyring.a 00:03:17.507 LIB libspdk_event_vhost_blk.a 00:03:17.507 LIB libspdk_event_vfu_tgt.a 00:03:17.507 LIB libspdk_event_scheduler.a 00:03:17.507 LIB libspdk_event_vmd.a 00:03:17.507 LIB libspdk_event_sock.a 00:03:17.507 LIB libspdk_event_iobuf.a 00:03:17.507 SO libspdk_event_vhost_blk.so.3.0 00:03:17.507 SO libspdk_event_keyring.so.1.0 00:03:17.507 SO libspdk_event_vfu_tgt.so.3.0 00:03:17.507 SO libspdk_event_scheduler.so.4.0 00:03:17.507 SO libspdk_event_sock.so.5.0 00:03:17.507 SO libspdk_event_vmd.so.6.0 00:03:17.507 SO libspdk_event_iobuf.so.3.0 00:03:17.507 SYMLINK libspdk_event_keyring.so 00:03:17.507 SYMLINK libspdk_event_vhost_blk.so 00:03:17.507 SYMLINK libspdk_event_vfu_tgt.so 00:03:17.507 SYMLINK libspdk_event_scheduler.so 00:03:17.507 SYMLINK libspdk_event_sock.so 00:03:17.507 SYMLINK libspdk_event_vmd.so 00:03:17.507 SYMLINK libspdk_event_iobuf.so 00:03:17.765 CC module/event/subsystems/accel/accel.o 00:03:18.023 LIB libspdk_event_accel.a 00:03:18.023 SO libspdk_event_accel.so.6.0 00:03:18.023 SYMLINK 
libspdk_event_accel.so 00:03:18.280 CC module/event/subsystems/bdev/bdev.o 00:03:18.280 LIB libspdk_event_bdev.a 00:03:18.280 SO libspdk_event_bdev.so.6.0 00:03:18.538 SYMLINK libspdk_event_bdev.so 00:03:18.538 CC module/event/subsystems/ublk/ublk.o 00:03:18.538 CC module/event/subsystems/scsi/scsi.o 00:03:18.538 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.538 CC module/event/subsystems/nbd/nbd.o 00:03:18.538 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.795 LIB libspdk_event_ublk.a 00:03:18.795 LIB libspdk_event_nbd.a 00:03:18.795 LIB libspdk_event_scsi.a 00:03:18.795 SO libspdk_event_ublk.so.3.0 00:03:18.795 SO libspdk_event_nbd.so.6.0 00:03:18.795 SO libspdk_event_scsi.so.6.0 00:03:18.795 SYMLINK libspdk_event_nbd.so 00:03:18.795 SYMLINK libspdk_event_ublk.so 00:03:18.795 LIB libspdk_event_nvmf.a 00:03:18.795 SYMLINK libspdk_event_scsi.so 00:03:18.795 SO libspdk_event_nvmf.so.6.0 00:03:18.795 SYMLINK libspdk_event_nvmf.so 00:03:19.052 CC module/event/subsystems/iscsi/iscsi.o 00:03:19.052 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:19.052 LIB libspdk_event_vhost_scsi.a 00:03:19.052 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.052 LIB libspdk_event_iscsi.a 00:03:19.310 SO libspdk_event_iscsi.so.6.0 00:03:19.310 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.310 SYMLINK libspdk_event_iscsi.so 00:03:19.310 SO libspdk.so.6.0 00:03:19.310 SYMLINK libspdk.so 00:03:19.571 CXX app/trace/trace.o 00:03:19.571 CC app/trace_record/trace_record.o 00:03:19.571 CC app/spdk_lspci/spdk_lspci.o 00:03:19.571 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.571 CC app/spdk_top/spdk_top.o 00:03:19.571 CC app/spdk_nvme_perf/perf.o 00:03:19.571 CC app/spdk_nvme_identify/identify.o 00:03:19.571 CC test/rpc_client/rpc_client_test.o 00:03:19.571 TEST_HEADER include/spdk/accel.h 00:03:19.571 TEST_HEADER include/spdk/accel_module.h 00:03:19.571 TEST_HEADER include/spdk/assert.h 00:03:19.571 TEST_HEADER include/spdk/barrier.h 00:03:19.571 TEST_HEADER include/spdk/bdev.h 00:03:19.571 TEST_HEADER include/spdk/base64.h 00:03:19.571 TEST_HEADER include/spdk/bdev_module.h 00:03:19.571 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.571 TEST_HEADER include/spdk/bit_array.h 00:03:19.571 TEST_HEADER include/spdk/bit_pool.h 00:03:19.571 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.571 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.571 TEST_HEADER include/spdk/blobfs.h 00:03:19.571 TEST_HEADER include/spdk/blob.h 00:03:19.571 TEST_HEADER include/spdk/conf.h 00:03:19.571 TEST_HEADER include/spdk/config.h 00:03:19.571 TEST_HEADER include/spdk/cpuset.h 00:03:19.571 TEST_HEADER include/spdk/crc16.h 00:03:19.571 TEST_HEADER include/spdk/crc32.h 00:03:19.571 TEST_HEADER include/spdk/crc64.h 00:03:19.571 TEST_HEADER include/spdk/dif.h 00:03:19.571 TEST_HEADER include/spdk/dma.h 00:03:19.571 TEST_HEADER include/spdk/endian.h 00:03:19.571 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.571 TEST_HEADER include/spdk/env.h 00:03:19.571 TEST_HEADER include/spdk/event.h 00:03:19.571 TEST_HEADER include/spdk/fd_group.h 00:03:19.571 TEST_HEADER include/spdk/fd.h 00:03:19.571 TEST_HEADER include/spdk/file.h 00:03:19.571 TEST_HEADER include/spdk/ftl.h 00:03:19.571 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.571 TEST_HEADER include/spdk/hexlify.h 00:03:19.571 TEST_HEADER include/spdk/histogram_data.h 00:03:19.571 TEST_HEADER include/spdk/idxd.h 00:03:19.571 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.571 TEST_HEADER include/spdk/init.h 00:03:19.571 TEST_HEADER include/spdk/ioat.h 00:03:19.571 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:19.571 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.571 TEST_HEADER include/spdk/json.h 00:03:19.571 TEST_HEADER include/spdk/keyring.h 00:03:19.571 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.571 TEST_HEADER include/spdk/keyring_module.h 00:03:19.571 TEST_HEADER include/spdk/likely.h 00:03:19.571 TEST_HEADER include/spdk/log.h 00:03:19.571 TEST_HEADER include/spdk/lvol.h 00:03:19.571 TEST_HEADER include/spdk/mmio.h 00:03:19.571 TEST_HEADER include/spdk/memory.h 00:03:19.571 TEST_HEADER include/spdk/nbd.h 00:03:19.571 TEST_HEADER include/spdk/net.h 00:03:19.571 TEST_HEADER include/spdk/notify.h 00:03:19.571 TEST_HEADER include/spdk/nvme.h 00:03:19.571 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.571 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.571 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.571 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.571 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.571 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.571 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.571 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.571 CC app/spdk_dd/spdk_dd.o 00:03:19.571 TEST_HEADER include/spdk/nvmf.h 00:03:19.571 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.571 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.571 TEST_HEADER include/spdk/opal.h 00:03:19.571 TEST_HEADER include/spdk/opal_spec.h 00:03:19.571 TEST_HEADER include/spdk/pci_ids.h 00:03:19.571 TEST_HEADER include/spdk/pipe.h 00:03:19.571 TEST_HEADER include/spdk/queue.h 00:03:19.571 TEST_HEADER include/spdk/rpc.h 00:03:19.571 TEST_HEADER include/spdk/reduce.h 00:03:19.571 TEST_HEADER include/spdk/scheduler.h 00:03:19.571 TEST_HEADER include/spdk/scsi.h 00:03:19.571 TEST_HEADER include/spdk/sock.h 00:03:19.571 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.571 TEST_HEADER include/spdk/stdinc.h 00:03:19.571 TEST_HEADER include/spdk/string.h 00:03:19.571 TEST_HEADER include/spdk/thread.h 00:03:19.571 TEST_HEADER include/spdk/trace.h 00:03:19.571 TEST_HEADER include/spdk/trace_parser.h 00:03:19.571 TEST_HEADER include/spdk/tree.h 00:03:19.571 TEST_HEADER include/spdk/ublk.h 00:03:19.571 TEST_HEADER include/spdk/util.h 00:03:19.571 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.571 TEST_HEADER include/spdk/uuid.h 00:03:19.571 TEST_HEADER include/spdk/version.h 00:03:19.571 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.571 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.571 TEST_HEADER include/spdk/vhost.h 00:03:19.571 TEST_HEADER include/spdk/vmd.h 00:03:19.571 TEST_HEADER include/spdk/xor.h 00:03:19.571 TEST_HEADER include/spdk/zipf.h 00:03:19.571 CXX test/cpp_headers/accel.o 00:03:19.571 CXX test/cpp_headers/accel_module.o 00:03:19.571 CXX test/cpp_headers/assert.o 00:03:19.571 CXX test/cpp_headers/barrier.o 00:03:19.571 CXX test/cpp_headers/base64.o 00:03:19.571 CXX test/cpp_headers/bdev.o 00:03:19.571 CXX test/cpp_headers/bdev_module.o 00:03:19.571 CXX test/cpp_headers/bdev_zone.o 00:03:19.571 CXX test/cpp_headers/bit_array.o 00:03:19.571 CXX test/cpp_headers/bit_pool.o 00:03:19.571 CXX test/cpp_headers/blob_bdev.o 00:03:19.571 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.571 CXX test/cpp_headers/blobfs.o 00:03:19.571 CXX test/cpp_headers/blob.o 00:03:19.571 CXX test/cpp_headers/conf.o 00:03:19.571 CXX test/cpp_headers/config.o 00:03:19.571 CXX test/cpp_headers/cpuset.o 00:03:19.571 CXX test/cpp_headers/crc16.o 00:03:19.571 CC app/nvmf_tgt/nvmf_main.o 00:03:19.571 CC app/spdk_tgt/spdk_tgt.o 00:03:19.571 CC examples/util/zipf/zipf.o 00:03:19.571 CC test/app/jsoncat/jsoncat.o 
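The TEST_HEADER list and the CXX test/cpp_headers/*.o lines around it come from the header self-containedness check: each public spdk/*.h is compiled on its own in a C++ translation unit. Roughly, and only as an illustration (the generated stub and the exact compiler flags here are assumptions):

    printf '#include <spdk/accel.h>\nint main(void) { return 0; }\n' > accel.cpp
    g++ -I include -c accel.cpp -o accel.o    # header must build standalone, with no other includes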
00:03:19.571 CC examples/ioat/verify/verify.o 00:03:19.571 CC examples/ioat/perf/perf.o 00:03:19.571 CC test/app/histogram_perf/histogram_perf.o 00:03:19.571 CC app/fio/nvme/fio_plugin.o 00:03:19.571 CC test/thread/poller_perf/poller_perf.o 00:03:19.571 CC test/env/memory/memory_ut.o 00:03:19.571 CC test/env/pci/pci_ut.o 00:03:19.571 CC test/env/vtophys/vtophys.o 00:03:19.571 CC test/app/stub/stub.o 00:03:19.571 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:19.837 CC app/fio/bdev/fio_plugin.o 00:03:19.837 CC test/app/bdev_svc/bdev_svc.o 00:03:19.837 CC test/dma/test_dma/test_dma.o 00:03:19.837 LINK spdk_lspci 00:03:19.837 CC test/env/mem_callbacks/mem_callbacks.o 00:03:19.837 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.837 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:19.837 LINK rpc_client_test 00:03:19.837 LINK spdk_nvme_discover 00:03:20.095 LINK jsoncat 00:03:20.095 LINK poller_perf 00:03:20.095 CXX test/cpp_headers/crc32.o 00:03:20.095 LINK interrupt_tgt 00:03:20.095 LINK zipf 00:03:20.095 LINK histogram_perf 00:03:20.095 CXX test/cpp_headers/crc64.o 00:03:20.095 LINK spdk_trace_record 00:03:20.095 LINK vtophys 00:03:20.095 CXX test/cpp_headers/dif.o 00:03:20.095 CXX test/cpp_headers/dma.o 00:03:20.095 LINK env_dpdk_post_init 00:03:20.095 CXX test/cpp_headers/endian.o 00:03:20.095 LINK nvmf_tgt 00:03:20.095 CXX test/cpp_headers/env_dpdk.o 00:03:20.095 CXX test/cpp_headers/env.o 00:03:20.095 CXX test/cpp_headers/event.o 00:03:20.095 CXX test/cpp_headers/fd_group.o 00:03:20.095 CXX test/cpp_headers/fd.o 00:03:20.095 CXX test/cpp_headers/file.o 00:03:20.095 LINK iscsi_tgt 00:03:20.095 LINK stub 00:03:20.095 CXX test/cpp_headers/gpt_spec.o 00:03:20.095 CXX test/cpp_headers/ftl.o 00:03:20.095 CXX test/cpp_headers/hexlify.o 00:03:20.095 CXX test/cpp_headers/histogram_data.o 00:03:20.095 LINK verify 00:03:20.095 LINK ioat_perf 00:03:20.095 LINK bdev_svc 00:03:20.095 LINK spdk_tgt 00:03:20.095 CXX test/cpp_headers/idxd.o 00:03:20.095 CXX test/cpp_headers/idxd_spec.o 00:03:20.095 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.357 CXX test/cpp_headers/init.o 00:03:20.357 LINK mem_callbacks 00:03:20.357 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.357 CXX test/cpp_headers/ioat.o 00:03:20.357 LINK spdk_dd 00:03:20.357 CXX test/cpp_headers/ioat_spec.o 00:03:20.357 LINK spdk_trace 00:03:20.357 CXX test/cpp_headers/iscsi_spec.o 00:03:20.357 CXX test/cpp_headers/json.o 00:03:20.357 CXX test/cpp_headers/jsonrpc.o 00:03:20.357 CXX test/cpp_headers/keyring.o 00:03:20.357 CXX test/cpp_headers/keyring_module.o 00:03:20.357 CXX test/cpp_headers/likely.o 00:03:20.357 CXX test/cpp_headers/log.o 00:03:20.357 CXX test/cpp_headers/lvol.o 00:03:20.357 LINK pci_ut 00:03:20.357 CXX test/cpp_headers/memory.o 00:03:20.357 CXX test/cpp_headers/mmio.o 00:03:20.357 CXX test/cpp_headers/nbd.o 00:03:20.357 CXX test/cpp_headers/net.o 00:03:20.357 CXX test/cpp_headers/notify.o 00:03:20.357 CXX test/cpp_headers/nvme.o 00:03:20.357 CXX test/cpp_headers/nvme_intel.o 00:03:20.357 CXX test/cpp_headers/nvme_ocssd.o 00:03:20.357 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:20.618 CXX test/cpp_headers/nvme_spec.o 00:03:20.618 LINK test_dma 00:03:20.618 CXX test/cpp_headers/nvme_zns.o 00:03:20.618 CXX test/cpp_headers/nvmf_cmd.o 00:03:20.618 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:20.618 CXX test/cpp_headers/nvmf.o 00:03:20.618 CXX test/cpp_headers/nvmf_spec.o 00:03:20.618 CXX test/cpp_headers/nvmf_transport.o 00:03:20.618 CXX test/cpp_headers/opal.o 00:03:20.618 CC 
test/event/reactor_perf/reactor_perf.o 00:03:20.618 CC test/event/reactor/reactor.o 00:03:20.618 CC test/event/event_perf/event_perf.o 00:03:20.618 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.618 CC examples/sock/hello_world/hello_sock.o 00:03:20.618 CXX test/cpp_headers/opal_spec.o 00:03:20.618 CC examples/vmd/led/led.o 00:03:20.618 CXX test/cpp_headers/pci_ids.o 00:03:20.618 CC test/event/app_repeat/app_repeat.o 00:03:20.618 LINK nvme_fuzz 00:03:20.618 CXX test/cpp_headers/pipe.o 00:03:20.618 CC examples/thread/thread/thread_ex.o 00:03:20.618 CC test/event/scheduler/scheduler.o 00:03:20.618 CXX test/cpp_headers/queue.o 00:03:20.879 LINK spdk_nvme 00:03:20.879 CC examples/idxd/perf/perf.o 00:03:20.879 LINK spdk_bdev 00:03:20.879 CXX test/cpp_headers/reduce.o 00:03:20.879 CXX test/cpp_headers/rpc.o 00:03:20.879 CXX test/cpp_headers/scheduler.o 00:03:20.879 CXX test/cpp_headers/scsi.o 00:03:20.879 CXX test/cpp_headers/scsi_spec.o 00:03:20.879 CXX test/cpp_headers/sock.o 00:03:20.879 CXX test/cpp_headers/stdinc.o 00:03:20.879 CXX test/cpp_headers/string.o 00:03:20.879 CXX test/cpp_headers/thread.o 00:03:20.879 CXX test/cpp_headers/trace.o 00:03:20.879 CXX test/cpp_headers/trace_parser.o 00:03:20.879 CXX test/cpp_headers/tree.o 00:03:20.879 CXX test/cpp_headers/ublk.o 00:03:20.879 CXX test/cpp_headers/util.o 00:03:20.879 CXX test/cpp_headers/uuid.o 00:03:20.879 CXX test/cpp_headers/version.o 00:03:20.879 CXX test/cpp_headers/vfio_user_pci.o 00:03:20.879 CXX test/cpp_headers/vfio_user_spec.o 00:03:20.879 CXX test/cpp_headers/vhost.o 00:03:20.879 LINK reactor 00:03:20.879 CXX test/cpp_headers/vmd.o 00:03:20.879 LINK reactor_perf 00:03:20.879 LINK lsvmd 00:03:20.879 LINK event_perf 00:03:20.879 CXX test/cpp_headers/xor.o 00:03:20.879 CXX test/cpp_headers/zipf.o 00:03:20.879 LINK led 00:03:21.143 LINK spdk_nvme_perf 00:03:21.143 LINK app_repeat 00:03:21.143 CC app/vhost/vhost.o 00:03:21.143 LINK vhost_fuzz 00:03:21.143 LINK memory_ut 00:03:21.143 LINK spdk_nvme_identify 00:03:21.143 LINK hello_sock 00:03:21.143 LINK spdk_top 00:03:21.143 LINK scheduler 00:03:21.143 LINK thread 00:03:21.401 CC test/nvme/connect_stress/connect_stress.o 00:03:21.401 CC test/nvme/startup/startup.o 00:03:21.401 CC test/nvme/sgl/sgl.o 00:03:21.401 CC test/nvme/reset/reset.o 00:03:21.401 CC test/nvme/overhead/overhead.o 00:03:21.401 CC test/nvme/e2edp/nvme_dp.o 00:03:21.401 CC test/nvme/err_injection/err_injection.o 00:03:21.401 CC test/nvme/reserve/reserve.o 00:03:21.401 CC test/nvme/simple_copy/simple_copy.o 00:03:21.401 CC test/nvme/aer/aer.o 00:03:21.401 CC test/nvme/boot_partition/boot_partition.o 00:03:21.401 CC test/nvme/compliance/nvme_compliance.o 00:03:21.401 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.401 CC test/accel/dif/dif.o 00:03:21.401 CC test/nvme/cuse/cuse.o 00:03:21.401 CC test/nvme/fdp/fdp.o 00:03:21.401 LINK vhost 00:03:21.401 CC test/blobfs/mkfs/mkfs.o 00:03:21.401 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.401 CC test/lvol/esnap/esnap.o 00:03:21.401 LINK idxd_perf 00:03:21.659 LINK boot_partition 00:03:21.659 LINK err_injection 00:03:21.659 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.659 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.659 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.659 CC examples/nvme/abort/abort.o 00:03:21.659 CC examples/nvme/reconnect/reconnect.o 00:03:21.659 CC examples/nvme/hotplug/hotplug.o 00:03:21.659 CC examples/nvme/hello_world/hello_world.o 00:03:21.659 CC examples/nvme/arbitration/arbitration.o 00:03:21.659 LINK reserve 
00:03:21.659 LINK startup 00:03:21.659 LINK doorbell_aers 00:03:21.659 LINK fused_ordering 00:03:21.659 LINK connect_stress 00:03:21.659 LINK mkfs 00:03:21.659 LINK aer 00:03:21.659 CC examples/accel/perf/accel_perf.o 00:03:21.659 LINK overhead 00:03:21.659 CC examples/blob/hello_world/hello_blob.o 00:03:21.659 CC examples/blob/cli/blobcli.o 00:03:21.659 LINK simple_copy 00:03:21.659 LINK nvme_dp 00:03:21.659 LINK nvme_compliance 00:03:21.659 LINK reset 00:03:21.917 LINK sgl 00:03:21.917 LINK fdp 00:03:21.917 LINK hello_world 00:03:21.917 LINK pmr_persistence 00:03:21.917 LINK cmb_copy 00:03:21.917 LINK hotplug 00:03:21.917 LINK dif 00:03:21.917 LINK arbitration 00:03:21.917 LINK hello_blob 00:03:22.174 LINK abort 00:03:22.174 LINK reconnect 00:03:22.174 LINK nvme_manage 00:03:22.174 LINK accel_perf 00:03:22.174 LINK blobcli 00:03:22.432 LINK iscsi_fuzz 00:03:22.432 CC test/bdev/bdevio/bdevio.o 00:03:22.691 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.691 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.948 LINK bdevio 00:03:22.948 LINK hello_bdev 00:03:22.948 LINK cuse 00:03:23.242 LINK bdevperf 00:03:23.832 CC examples/nvmf/nvmf/nvmf.o 00:03:24.090 LINK nvmf 00:03:26.618 LINK esnap 00:03:26.618 00:03:26.618 real 0m40.882s 00:03:26.618 user 7m23.684s 00:03:26.618 sys 1m46.969s 00:03:26.618 01:41:41 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:26.618 01:41:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:26.618 ************************************ 00:03:26.618 END TEST make 00:03:26.618 ************************************ 00:03:26.618 01:41:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:26.618 01:41:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:26.618 01:41:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:26.618 01:41:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.618 01:41:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:26.618 01:41:41 -- pm/common@44 -- $ pid=1186847 00:03:26.618 01:41:41 -- pm/common@50 -- $ kill -TERM 1186847 00:03:26.618 01:41:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.618 01:41:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:26.618 01:41:41 -- pm/common@44 -- $ pid=1186849 00:03:26.618 01:41:41 -- pm/common@50 -- $ kill -TERM 1186849 00:03:26.618 01:41:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.618 01:41:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:26.618 01:41:41 -- pm/common@44 -- $ pid=1186851 00:03:26.618 01:41:41 -- pm/common@50 -- $ kill -TERM 1186851 00:03:26.618 01:41:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.618 01:41:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:26.618 01:41:41 -- pm/common@44 -- $ pid=1186879 00:03:26.618 01:41:41 -- pm/common@50 -- $ sudo -E kill -TERM 1186879 00:03:26.876 01:41:41 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:26.876 01:41:41 -- nvmf/common.sh@7 -- # uname -s 00:03:26.876 01:41:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:26.876 01:41:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:26.876 01:41:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:03:26.876 01:41:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:26.876 01:41:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:26.876 01:41:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:26.876 01:41:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:26.876 01:41:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:26.876 01:41:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:26.876 01:41:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:26.876 01:41:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:26.876 01:41:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:26.876 01:41:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:26.876 01:41:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:26.876 01:41:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:26.876 01:41:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:26.876 01:41:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:26.876 01:41:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:26.876 01:41:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.876 01:41:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.877 01:41:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.877 01:41:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.877 01:41:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.877 01:41:41 -- paths/export.sh@5 -- # export PATH 00:03:26.877 01:41:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.877 01:41:41 -- nvmf/common.sh@47 -- # : 0 00:03:26.877 01:41:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:26.877 01:41:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:26.877 01:41:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:26.877 01:41:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:26.877 01:41:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:26.877 01:41:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:26.877 01:41:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:26.877 01:41:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:26.877 01:41:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:26.877 01:41:41 -- spdk/autotest.sh@32 -- # uname -s 00:03:26.877 01:41:41 -- 
spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:26.877 01:41:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:26.877 01:41:41 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:26.877 01:41:41 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:26.877 01:41:41 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:26.877 01:41:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:26.877 01:41:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:26.877 01:41:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:26.877 01:41:41 -- spdk/autotest.sh@48 -- # udevadm_pid=1262154 00:03:26.877 01:41:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:26.877 01:41:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:26.877 01:41:41 -- pm/common@17 -- # local monitor 00:03:26.877 01:41:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.877 01:41:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.877 01:41:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.877 01:41:41 -- pm/common@21 -- # date +%s 00:03:26.877 01:41:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.877 01:41:41 -- pm/common@21 -- # date +%s 00:03:26.877 01:41:41 -- pm/common@25 -- # sleep 1 00:03:26.877 01:41:41 -- pm/common@21 -- # date +%s 00:03:26.877 01:41:41 -- pm/common@21 -- # date +%s 00:03:26.877 01:41:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721778101 00:03:26.877 01:41:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721778101 00:03:26.877 01:41:41 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721778101 00:03:26.877 01:41:41 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721778101 00:03:26.877 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721778101_collect-vmstat.pm.log 00:03:26.877 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721778101_collect-cpu-load.pm.log 00:03:26.877 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721778101_collect-cpu-temp.pm.log 00:03:26.877 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721778101_collect-bmc-pm.bmc.pm.log 00:03:27.814 01:41:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:27.814 01:41:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:27.814 01:41:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:27.814 01:41:42 -- common/autotest_common.sh@10 -- # set +x 00:03:27.814 01:41:42 -- spdk/autotest.sh@59 -- # 
create_test_list 00:03:27.814 01:41:42 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:27.814 01:41:42 -- common/autotest_common.sh@10 -- # set +x 00:03:27.814 01:41:42 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:27.814 01:41:42 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:27.814 01:41:42 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:27.814 01:41:42 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:27.814 01:41:42 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:27.814 01:41:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.814 01:41:42 -- common/autotest_common.sh@1453 -- # uname 00:03:27.814 01:41:42 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:27.814 01:41:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.814 01:41:42 -- common/autotest_common.sh@1473 -- # uname 00:03:27.814 01:41:42 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:27.814 01:41:42 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:27.814 01:41:42 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:27.814 01:41:42 -- spdk/autotest.sh@72 -- # hash lcov 00:03:27.814 01:41:42 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:27.814 01:41:42 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:27.814 --rc lcov_branch_coverage=1 00:03:27.814 --rc lcov_function_coverage=1 00:03:27.814 --rc genhtml_branch_coverage=1 00:03:27.814 --rc genhtml_function_coverage=1 00:03:27.814 --rc genhtml_legend=1 00:03:27.814 --rc geninfo_all_blocks=1 00:03:27.814 ' 00:03:27.814 01:41:42 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:27.814 --rc lcov_branch_coverage=1 00:03:27.814 --rc lcov_function_coverage=1 00:03:27.814 --rc genhtml_branch_coverage=1 00:03:27.814 --rc genhtml_function_coverage=1 00:03:27.814 --rc genhtml_legend=1 00:03:27.814 --rc geninfo_all_blocks=1 00:03:27.814 ' 00:03:27.814 01:41:42 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:27.814 --rc lcov_branch_coverage=1 00:03:27.814 --rc lcov_function_coverage=1 00:03:27.814 --rc genhtml_branch_coverage=1 00:03:27.814 --rc genhtml_function_coverage=1 00:03:27.814 --rc genhtml_legend=1 00:03:27.814 --rc geninfo_all_blocks=1 00:03:27.814 --no-external' 00:03:27.814 01:41:42 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:27.814 --rc lcov_branch_coverage=1 00:03:27.814 --rc lcov_function_coverage=1 00:03:27.814 --rc genhtml_branch_coverage=1 00:03:27.814 --rc genhtml_function_coverage=1 00:03:27.814 --rc genhtml_legend=1 00:03:27.814 --rc geninfo_all_blocks=1 00:03:27.814 --no-external' 00:03:27.814 01:41:42 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:27.814 lcov: LCOV version 1.14 00:03:27.815 01:41:42 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no 
functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:29.716 
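The "no functions found" notices above and below are geninfo reporting .gcno files that contribute no function records to the initial capture; they are informational, not failures. They come from the Baseline capture started a few lines earlier, which is the first half of the usual lcov workflow. A sketch of that workflow, with the later steps assumed rather than shown in this excerpt and $SPDK_DIR standing in for the checkout path used above:

    lcov -q -c -i -t Baseline -d "$SPDK_DIR" -o cov_base.info   # zero-coverage baseline from .gcno graphs
    # ... run the test suites ...
    lcov -q -c    -t Tests    -d "$SPDK_DIR" -o cov_test.info   # capture .gcda data after the tests
    lcov -a cov_base.info -a cov_test.info -o cov_total.info    # merge baseline and test coverage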
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:29.716 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:29.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:29.716 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:29.717 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:29.717 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:29.717 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:29.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:29.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:29.718 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:44.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.594 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:02.671 01:42:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:02.671 01:42:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:02.671 01:42:15 -- common/autotest_common.sh@10 -- # set +x 00:04:02.671 01:42:15 -- spdk/autotest.sh@91 -- # rm -f 00:04:02.671 01:42:15 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.671 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:02.671 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:02.671 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:02.671 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:02.671 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:02.671 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:02.672 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:02.672 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:02.672 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:02.672 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:02.672 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:02.672 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:02.672 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:02.672 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:02.672 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:02.672 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:02.672 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:02.672 01:42:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 
00:04:02.672 01:42:16 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:02.672 01:42:16 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:02.672 01:42:16 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:02.672 01:42:16 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:02.672 01:42:16 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:02.672 01:42:16 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:02.672 01:42:16 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.672 01:42:16 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:02.672 01:42:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:02.672 01:42:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:02.672 01:42:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:02.672 01:42:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:02.672 01:42:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:02.672 01:42:16 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:02.672 No valid GPT data, bailing 00:04:02.672 01:42:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.672 01:42:17 -- scripts/common.sh@391 -- # pt= 00:04:02.672 01:42:17 -- scripts/common.sh@392 -- # return 1 00:04:02.672 01:42:17 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:02.672 1+0 records in 00:04:02.672 1+0 records out 00:04:02.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00249557 s, 420 MB/s 00:04:02.672 01:42:17 -- spdk/autotest.sh@118 -- # sync 00:04:02.672 01:42:17 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:02.672 01:42:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:02.672 01:42:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.047 01:42:18 -- spdk/autotest.sh@124 -- # uname -s 00:04:04.047 01:42:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:04.047 01:42:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:04.047 01:42:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.047 01:42:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.047 01:42:18 -- common/autotest_common.sh@10 -- # set +x 00:04:04.047 ************************************ 00:04:04.047 START TEST setup.sh 00:04:04.047 ************************************ 00:04:04.047 01:42:18 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:04.305 * Looking for test storage... 
00:04:04.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:04.305 01:42:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:04.305 01:42:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:04.305 01:42:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:04.305 01:42:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.305 01:42:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.305 01:42:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:04.305 ************************************ 00:04:04.305 START TEST acl 00:04:04.305 ************************************ 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:04.305 * Looking for test storage... 00:04:04.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:04.305 01:42:19 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.305 01:42:19 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:04.305 01:42:19 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:04.305 01:42:19 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:04.305 01:42:19 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:04.305 01:42:19 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:04.305 01:42:19 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:04.305 01:42:19 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.305 01:42:19 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.680 01:42:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:05.680 01:42:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:05.680 01:42:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:05.680 01:42:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:05.680 01:42:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.680 01:42:20 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:06.661 Hugepages 00:04:06.661 node hugesize free / total 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.661 00:04:06.661 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.661 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:06.662 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:06.921 01:42:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:06.921 01:42:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.921 01:42:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.921 01:42:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:06.921 ************************************ 00:04:06.921 START TEST denied 00:04:06.921 ************************************ 00:04:06.921 01:42:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:06.921 01:42:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:06.921 01:42:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output 
config 00:04:06.921 01:42:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:06.921 01:42:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.921 01:42:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.297 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.297 01:42:22 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.836 00:04:10.836 real 0m3.714s 00:04:10.836 user 0m1.075s 00:04:10.836 sys 0m1.733s 00:04:10.836 01:42:25 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.836 01:42:25 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:10.836 ************************************ 00:04:10.836 END TEST denied 00:04:10.836 ************************************ 00:04:10.836 01:42:25 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:10.836 01:42:25 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.836 01:42:25 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.836 01:42:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:10.836 ************************************ 00:04:10.836 START TEST allowed 00:04:10.836 ************************************ 00:04:10.836 01:42:25 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:10.836 01:42:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:10.836 01:42:25 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:10.836 01:42:25 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:10.836 01:42:25 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.836 01:42:25 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.367 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.367 01:42:27 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:13.367 01:42:27 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:13.367 01:42:27 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:13.367 01:42:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.367 01:42:27 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.743 00:04:14.743 real 0m3.892s 00:04:14.743 user 0m1.027s 00:04:14.744 sys 0m1.662s 00:04:14.744 01:42:29 
setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.744 01:42:29 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:14.744 ************************************ 00:04:14.744 END TEST allowed 00:04:14.744 ************************************ 00:04:14.744 00:04:14.744 real 0m10.271s 00:04:14.744 user 0m3.159s 00:04:14.744 sys 0m5.066s 00:04:14.744 01:42:29 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.744 01:42:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:14.744 ************************************ 00:04:14.744 END TEST acl 00:04:14.744 ************************************ 00:04:14.744 01:42:29 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:14.744 01:42:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.744 01:42:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.744 01:42:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.744 ************************************ 00:04:14.744 START TEST hugepages 00:04:14.744 ************************************ 00:04:14.744 01:42:29 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:14.744 * Looking for test storage... 00:04:14.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 41801060 kB' 'MemAvailable: 45292284 kB' 'Buffers: 2704 kB' 'Cached: 12223808 kB' 'SwapCached: 0 kB' 'Active: 9190212 kB' 'Inactive: 3493800 kB' 'Active(anon): 8796708 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 460760 kB' 'Mapped: 213456 kB' 'Shmem: 8339208 kB' 'KReclaimable: 195516 kB' 'Slab: 568076 kB' 'SReclaimable: 195516 kB' 'SUnreclaim: 372560 kB' 'KernelStack: 12784 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 9899772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.744 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 
01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.745 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.746 01:42:29 setup.sh.hugepages -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:14.746 01:42:29 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:14.746 01:42:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.746 01:42:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.746 01:42:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:14.746 ************************************ 00:04:14.746 START TEST default_setup 00:04:14.746 ************************************ 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.746 01:42:29 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.746 01:42:29 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.682 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:15.682 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:15.682 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:15.941 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:15.941 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:15.941 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:15.941 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:15.941 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:15.941 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:16.880 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43907348 kB' 'MemAvailable: 47398660 kB' 'Buffers: 2704 kB' 'Cached: 12223896 kB' 'SwapCached: 0 kB' 'Active: 9207572 kB' 'Inactive: 3493800 kB' 'Active(anon): 8814068 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477976 kB' 'Mapped: 213552 kB' 'Shmem: 8339296 kB' 'KReclaimable: 195692 kB' 'Slab: 567764 kB' 'SReclaimable: 195692 kB' 'SUnreclaim: 372072 kB' 'KernelStack: 12768 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196356 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.880 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.880 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
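The trace above is setup/common.sh's get_meminfo walking /proc/meminfo entry by entry: it mapfiles the file, strips any leading "Node <N> " prefix, splits each line on ': ', and echoes the value once the requested key (AnonHugePages here) matches, "continue"-ing past everything else. A minimal standalone sketch of that parsing pattern follows; the helper name get_meminfo_value is hypothetical, the real helper lives in setup/common.sh and this is only an approximation of it under stated assumptions.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup pattern seen in the trace above.
# get_meminfo_value is an illustrative name, not the in-tree function.
get_meminfo_value() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo var val _

	# Per-node lookups read that node's own meminfo when it exists.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	local -a mem
	mapfile -t mem <"$mem_f"
	# Per-node meminfo lines carry a "Node <N> " prefix; strip it so the
	# key/value split below works for both file layouts.
	shopt -s extglob
	mem=("${mem[@]#Node +([0-9]) }")

	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"
		if [[ $var == "$get" ]]; then
			echo "$val"   # kB for sizes, a plain count for HugePages_* keys
			return 0
		fi
	done
	return 1
}

get_meminfo_value AnonHugePages    # prints 0 on this test node
get_meminfo_value HugePages_Total  # prints 1024 on this test node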
00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.881 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43908580 kB' 'MemAvailable: 47399892 kB' 'Buffers: 2704 kB' 'Cached: 12223900 kB' 'SwapCached: 0 kB' 'Active: 9207068 kB' 'Inactive: 3493800 kB' 'Active(anon): 8813564 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477464 kB' 'Mapped: 213556 kB' 'Shmem: 8339300 kB' 'KReclaimable: 195692 kB' 'Slab: 567836 kB' 'SReclaimable: 195692 kB' 'SUnreclaim: 372144 kB' 'KernelStack: 12704 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.882 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.883 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.884 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43908984 kB' 'MemAvailable: 47400292 kB' 'Buffers: 2704 kB' 'Cached: 12223916 kB' 'SwapCached: 0 kB' 'Active: 9206956 kB' 'Inactive: 3493800 kB' 'Active(anon): 8813452 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477352 kB' 'Mapped: 213480 kB' 'Shmem: 8339316 kB' 'KReclaimable: 195684 kB' 'Slab: 567764 kB' 'SReclaimable: 195684 kB' 'SUnreclaim: 372080 kB' 'KernelStack: 12736 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9925044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:16.884 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.885 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.147 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
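For the HugePages_Rsvd and HugePages_Surp counters being scanned here there is nothing script-specific about the data itself; when eyeballing a log like this one, the same values can be pulled straight out of /proc/meminfo. A purely illustrative snippet, not part of the autotest scripts:

# Quick manual check of the hugepage counters the trace is scanning for.
awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
grep -E '^(Hugepagesize|Hugetlb):' /proc/meminfo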
00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.148 nr_hugepages=1024 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.148 resv_hugepages=0 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.148 surplus_hugepages=0 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.148 anon_hugepages=0 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43908736 kB' 'MemAvailable: 47400044 kB' 'Buffers: 2704 kB' 'Cached: 12223952 kB' 'SwapCached: 0 kB' 'Active: 9207272 kB' 'Inactive: 3493800 kB' 'Active(anon): 8813768 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477712 kB' 'Mapped: 213480 kB' 'Shmem: 8339352 kB' 'KReclaimable: 195684 kB' 'Slab: 567772 kB' 'SReclaimable: 195684 kB' 'SUnreclaim: 372088 kB' 'KernelStack: 12704 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:17.148 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.149 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.150 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19087928 kB' 'MemUsed: 13789012 kB' 'SwapCached: 0 kB' 'Active: 7248932 kB' 'Inactive: 3259208 kB' 'Active(anon): 7118228 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231120 kB' 'Mapped: 46180 kB' 'AnonPages: 280200 kB' 'Shmem: 6841208 kB' 'KernelStack: 7320 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107248 kB' 'Slab: 349264 kB' 'SReclaimable: 107248 kB' 'SUnreclaim: 242016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.150 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.151 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.152 node0=1024 expecting 1024 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.152 00:04:17.152 real 0m2.394s 00:04:17.152 user 0m0.625s 00:04:17.152 sys 0m0.881s 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.152 01:42:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:17.152 ************************************ 00:04:17.152 END TEST default_setup 00:04:17.152 ************************************ 00:04:17.152 01:42:31 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:17.152 01:42:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.152 01:42:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.152 01:42:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:17.152 ************************************ 00:04:17.152 START TEST per_node_1G_alloc 00:04:17.152 
************************************ 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:17.152 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.153 01:42:31 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.535 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:18.535 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.535 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:18.535 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:18.535 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:18.535 0000:00:04.3 (8086 0e23): Already 
using the vfio-pci driver 00:04:18.535 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:18.535 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:18.535 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:18.535 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:18.535 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:18.535 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:18.535 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:18.535 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:18.535 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:18.535 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:18.535 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:18.535 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43902488 kB' 'MemAvailable: 47393792 kB' 'Buffers: 2704 kB' 'Cached: 12224016 kB' 'SwapCached: 0 kB' 'Active: 9213336 kB' 'Inactive: 3493800 kB' 'Active(anon): 8819832 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483620 kB' 'Mapped: 214396 kB' 'Shmem: 8339416 kB' 'KReclaimable: 195676 kB' 'Slab: 567672 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371996 kB' 'KernelStack: 12688 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9926804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196392 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
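This second field-by-field scan is the same lookup, now fetching AnonHugePages inside verify_nr_hugepages for the per_node_1G_alloc test (the `always [madvise] never` test a few entries earlier appears to be checking the transparent-hugepage setting before the anonymous-hugepage count is read). The sizing behind the numbers: the test asked for 1048576 kB (1 GiB) of hugepages on each of nodes 0 and 1, which at the 2048 kB Hugepagesize reported in the meminfo dumps works out to 512 pages per node, hence the NRHUGE=512 and HUGENODE=0,1 handed to scripts/setup.sh and the nr_hugepages=1024 the verification expects system-wide. A rough, stand-alone restatement of that arithmetic and check follows, with this run's numbers hard-coded for illustration (the test itself drives the allocation through scripts/setup.sh rather than by hand):

```bash
#!/usr/bin/env bash
# Illustrative re-statement of the per-node sizing and the final check;
# values mirror this run (2 NUMA nodes, 2048 kB default hugepage size).
size_kb=1048576                                            # 1 GiB requested per node
hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
per_node=$(( size_kb / hp_kb ))                            # 1048576 / 2048 = 512
nodes=(0 1)
expected=$(( per_node * ${#nodes[@]} ))                    # 1024 system-wide

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == expected )); then
    echo "HugePages_Total=$total matches the $expected pages requested"
fi

# Per-node counts come from the node-local meminfo files the trace also reads.
for n in "${nodes[@]}"; do
    grep HugePages_Total "/sys/devices/system/node/node$n/meminfo"
done
```

The node0=1024 expecting 1024 line in the default_setup summary above is the single-node version of the same comparison; here the test instead asks for the 1024 pages split 512/512 across the two nodes.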
00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.536 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.537 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43904616 kB' 'MemAvailable: 47395920 kB' 'Buffers: 2704 kB' 'Cached: 12224020 kB' 'SwapCached: 0 kB' 'Active: 9208852 kB' 'Inactive: 3493800 kB' 'Active(anon): 8815348 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479128 kB' 'Mapped: 214396 kB' 'Shmem: 8339420 kB' 'KReclaimable: 195676 kB' 'Slab: 567672 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371996 kB' 'KernelStack: 12752 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9922720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196404 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.538 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 
01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.539 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 
01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43905172 kB' 'MemAvailable: 47396476 kB' 'Buffers: 2704 kB' 'Cached: 12224036 kB' 'SwapCached: 0 kB' 'Active: 9213136 kB' 'Inactive: 3493800 kB' 'Active(anon): 8819632 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483380 kB' 'Mapped: 213928 kB' 'Shmem: 8339436 kB' 'KReclaimable: 195676 kB' 'Slab: 567728 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372052 kB' 'KernelStack: 12752 kB' 'PageTables: 8256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9926848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196376 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 
01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.540 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.540 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.541 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.542 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:18.542 nr_hugepages=1024 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:18.542 resv_hugepages=0 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:18.542 surplus_hugepages=0 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:18.542 anon_hugepages=0 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.542 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43904620 kB' 'MemAvailable: 47395924 kB' 'Buffers: 2704 kB' 'Cached: 12224060 kB' 'SwapCached: 0 kB' 'Active: 9213220 kB' 'Inactive: 3493800 kB' 'Active(anon): 8819716 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483436 kB' 'Mapped: 214396 kB' 'Shmem: 8339460 kB' 'KReclaimable: 195676 kB' 'Slab: 567728 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372052 kB' 'KernelStack: 12736 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9926868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196392 kB' 
'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.543 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.544 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.545 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20127868 kB' 'MemUsed: 12749072 kB' 'SwapCached: 0 kB' 'Active: 7249420 kB' 'Inactive: 3259208 kB' 'Active(anon): 7118716 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231124 kB' 'Mapped: 46496 kB' 'AnonPages: 280688 kB' 'Shmem: 6841212 kB' 'KernelStack: 7352 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107248 kB' 'Slab: 349184 kB' 'SReclaimable: 107248 kB' 'SUnreclaim: 241936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.545 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.546 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 23781892 
kB' 'MemUsed: 3882888 kB' 'SwapCached: 0 kB' 'Active: 1958128 kB' 'Inactive: 234592 kB' 'Active(anon): 1695328 kB' 'Inactive(anon): 0 kB' 'Active(file): 262800 kB' 'Inactive(file): 234592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1995684 kB' 'Mapped: 167312 kB' 'AnonPages: 197108 kB' 'Shmem: 1498292 kB' 'KernelStack: 5384 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 218544 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 130116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.547 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:18.548 node0=512 expecting 512 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:18.548 node1=512 expecting 512 00:04:18.548 01:42:33 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:18.548 00:04:18.548 real 0m1.535s 00:04:18.548 user 0m0.620s 00:04:18.548 sys 0m0.881s 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.548 01:42:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:18.548 ************************************ 00:04:18.548 END TEST per_node_1G_alloc 00:04:18.548 ************************************ 00:04:18.807 01:42:33 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:18.807 01:42:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.807 01:42:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.807 01:42:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.807 ************************************ 00:04:18.807 START TEST even_2G_alloc 00:04:18.807 ************************************ 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:18.807 
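
Note: per_node_1G_alloc finishes above with both NUMA nodes holding the expected 512 pages, and even_2G_alloc starts by sizing its own pool: 2097152 kB (2 GiB) at the 2048 kB Hugepagesize reported in /proc/meminfo gives nr_hugepages=1024, which the loop just traced splits evenly across the two nodes. A simplified sketch of that sizing and split, with hypothetical variable names (the real logic lives in setup/hugepages.sh):

    # Sketch only; simplified relative to get_test_nr_hugepages_per_node.
    size_kb=2097152                                  # 2 GiB requested by even_2G_alloc
    hugepagesize_kb=2048                             # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 1024
    no_nodes=2
    declare -a nodes_test
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))   # -> 512 per node
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"  # node0=512 node1=512
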
01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.807 01:42:33 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.741 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.741 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.741 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.741 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.741 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.741 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.741 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.741 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.741 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:19.741 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:19.741 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:19.741 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:19.741 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:19.741 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:19.741 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:19.741 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:19.741 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:20.001 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.002 
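
Note: the get_meminfo call traced above slurps the whole of /proc/meminfo with mapfile (it would read /sys/devices/system/node/node<N>/meminfo instead if a node were requested; here node= is empty), and the long run of '[[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue' lines that follows is just set -x output of a field-by-field scan. A compact sketch of the same pattern, using a hypothetical helper name (the real helper is get_meminfo in setup/common.sh and handles a few more cases, e.g. the 'Node N ' prefix in per-node files):

    my_get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local var val _
        # Per-node variant, used when a NUMA node is requested and present.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            # Every non-matching field shows up in the xtrace below as
            # "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" plus "continue";
            # the backslashes are just how set -x prints a literal == pattern.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < "$mem_f"
        echo 0
    }
    my_get_meminfo AnonHugePages    # prints 0 on this box, hence anon=0 later
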
01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43919288 kB' 'MemAvailable: 47410592 kB' 'Buffers: 2704 kB' 'Cached: 12224156 kB' 'SwapCached: 0 kB' 'Active: 9207436 kB' 'Inactive: 3493800 kB' 'Active(anon): 8813932 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477528 kB' 'Mapped: 213564 kB' 'Shmem: 8339556 kB' 'KReclaimable: 195676 kB' 'Slab: 567548 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371872 kB' 'KernelStack: 12704 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 
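
Note: the snapshot printed at the top of this scan already shows the pool in the expected state: HugePages_Total and HugePages_Free are both 1024, Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. the full 2 GiB requested by even_2G_alloc is reserved and still free. A quick consistency check of those numbers:

    # Cross-check of the snapshot above: the hugetlb pool accounts for
    # exactly HugePages_Total pages of Hugepagesize each.
    hp_total=1024      # HugePages_Total
    hp_size_kb=2048    # Hugepagesize
    echo $(( hp_total * hp_size_kb ))   # 2097152 kB, matching 'Hugetlb: 2097152 kB'
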
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.002 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.003 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43920636 kB' 'MemAvailable: 47411940 kB' 'Buffers: 2704 kB' 'Cached: 12224160 kB' 'SwapCached: 0 kB' 'Active: 9207548 kB' 'Inactive: 3493800 kB' 'Active(anon): 8814044 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477628 kB' 'Mapped: 213500 kB' 'Shmem: 8339560 kB' 'KReclaimable: 195676 kB' 'Slab: 567536 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371860 kB' 'KernelStack: 12736 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 
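
Note: this second get_meminfo pass walks the same snapshot again, this time looking for HugePages_Surp. Outside the test harness, an equivalent one-off query (a spot check only, not what the suite uses) would be:

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # expected: 0
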
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.004 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 
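
Note: further up, at setup/hugepages.sh@96, the trace showed '[[ always [madvise] never != *[never]* ]]'. That string is the content of /sys/kernel/mm/transparent_hugepage/enabled, with the bracketed word marking the active THP mode, and the check decides whether AnonHugePages is worth reading at all. A sketch of that gate, reusing the hypothetical helper sketched earlier:

    thp_state=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_state != *"[never]"* ]]; then
        anon=$(my_get_meminfo AnonHugePages)   # my_get_meminfo is the sketch above
    else
        anon=0
    fi
    echo "anon=$anon"
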
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.005 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 
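
Note: the pool fields being matched here (HugePages_Total/Free/Rsvd/Surp) are also exposed per page size under sysfs. For the 2048 kB pool used in this run the same counters could be read directly (expected values taken from the /proc/meminfo snapshot above; no sysfs reads appear in this log):

    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages        # 1024
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages      # 1024
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/resv_hugepages      # 0
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages   # 0
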
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 
01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.006 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43921844 kB' 'MemAvailable: 47413148 kB' 'Buffers: 2704 kB' 'Cached: 12224192 kB' 'SwapCached: 0 kB' 'Active: 9207780 kB' 'Inactive: 3493800 kB' 'Active(anon): 8814276 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477900 kB' 'Mapped: 213500 kB' 'Shmem: 8339592 kB' 'KReclaimable: 195676 kB' 'Slab: 567592 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371916 kB' 'KernelStack: 12768 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.007 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [repetitive xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == HugePages_Rsvd ]] / continue cycle repeats for every remaining /proc/meminfo field from Buffers through FilePmdMapped, none of which matches] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.009 nr_hugepages=1024 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.009 resv_hugepages=0 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.009 surplus_hugepages=0 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.009 anon_hugepages=0 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:20.009 
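For readability, here is a rough sketch of what the get_meminfo helper traced above (setup/common.sh@17-33) is doing for each of these lookups: read /proc/meminfo, or the per-node copy under /sys/devices/system/node/nodeN/meminfo, strip any "Node N " prefix, and walk the fields until the requested key matches, echoing its value. This is a simplified reconstruction from the trace, not the script itself; the name get_meminfo_sketch and the exact structure are illustrative.

    #!/usr/bin/env bash
    # Illustrative reconstruction of the meminfo lookups seen in the trace above.
    shopt -s extglob
    get_meminfo_sketch() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # Per-node lookups use the sysfs copy, whose lines carry a "Node N " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # drop the "Node N " prefix, as the trace does
      local line var val _
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue  # skip every non-matching field, as in the trace
        echo "$val"                       # e.g. 0 for HugePages_Rsvd, 1024 for HugePages_Total
        return 0
      done
      return 1
    }
    # Example calls matching the values traced above:
    #   get_meminfo_sketch HugePages_Rsvd   -> 0
    #   get_meminfo_sketch HugePages_Total  -> 1024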
01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.009 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43921844 kB' 'MemAvailable: 47413148 kB' 'Buffers: 2704 kB' 'Cached: 12224200 kB' 'SwapCached: 0 kB' 'Active: 9207800 kB' 'Inactive: 3493800 kB' 'Active(anon): 8814296 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477892 kB' 'Mapped: 213500 kB' 'Shmem: 8339600 kB' 'KReclaimable: 195676 kB' 'Slab: 567592 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371916 kB' 'KernelStack: 12768 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9920888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.010 01:42:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.010 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [repetitive xtrace condensed: the field-by-field scan repeats for every /proc/meminfo field from Cached through CmaTotal, none of which matches HugePages_Total] 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.273 01:42:34
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:20.273 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20138084 kB' 'MemUsed: 12738856 kB' 'SwapCached: 0 kB' 'Active: 7249508 kB' 'Inactive: 3259208 kB' 'Active(anon): 7118804 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231128 kB' 'Mapped: 46180 kB' 'AnonPages: 280720 kB' 'Shmem: 6841216 kB' 'KernelStack: 7368 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107248 kB' 'Slab: 349012 kB' 'SReclaimable: 107248 kB' 'SUnreclaim: 241764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.274 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [repetitive xtrace condensed: the field-by-field scan of /sys/devices/system/node/node0/meminfo repeats from Inactive(anon) through FileHugePages, none of which matches HugePages_Surp] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
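The hugepages.sh@115-117 steps just traced for node 0 (and about to repeat for node 1) build the per-node expectations: starting from the 512 pages per node recorded by get_nodes, each node's count is bumped by the reserved pages read earlier and by that node's own HugePages_Surp. Below is a hedged reconstruction of that accounting, reusing the illustrative get_meminfo_sketch helper from the earlier sketch; nodes_test, resv and the 512 starting values mirror the trace, but the loop itself is a paraphrase, not the script verbatim.

    # Per-node accounting as traced above: resv=0 came from the HugePages_Rsvd lookup,
    # and get_nodes recorded 512 expected 2048kB pages for each of the two NUMA nodes.
    nodes_test=([0]=512 [1]=512)
    resv=0
    for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                      # hugepages.sh@116
      surp=$(get_meminfo_sketch HugePages_Surp "$node")   # hugepages.sh@117
      (( nodes_test[node] += surp ))
    done
    echo "expected per-node pages: node0=${nodes_test[0]} node1=${nodes_test[1]}"  # 512 / 512 here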
00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 23783256 kB' 'MemUsed: 3881524 kB' 'SwapCached: 0 kB' 'Active: 1958100 kB' 'Inactive: 234592 kB' 'Active(anon): 1695300 kB' 'Inactive(anon): 0 kB' 'Active(file): 262800 kB' 'Inactive(file): 234592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1995820 kB' 'Mapped: 167320 kB' 'AnonPages: 196908 kB' 'Shmem: 1498428 kB' 'KernelStack: 5368 kB' 'PageTables: 3596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 218580 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 130152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.275 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
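At this point the trace is inside verify_nr_hugepages (setup/hugepages.sh@115-117): for every NUMA node the expected page count is topped up with the reserved pages and with that node's own surplus pages before the final comparison. The following is a hedged sketch of that accounting step; it reuses the get_meminfo sketch above, and the starting values mirror this run (two nodes expected to hold 512 pages each, no reserved pages).

# Approximation of the per-node accounting traced above (setup/hugepages.sh@115-117).
nodes_test=(512 512)   # expected pages per node in the even_2G_alloc run
resv=0                 # system-wide reserved pages in this run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                 # add reserved pages
    surp=$(get_meminfo HugePages_Surp "$node")     # per-node surplus, 0 in this run
    (( nodes_test[node] += surp ))
done

echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"

Both lookups return 0 here, so each node stays at the 512 pages the even_2G_alloc test expects.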
00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.276 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
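Once the node-1 scan below finishes, the test echoes "node0=512 expecting 512" and "node1=512 expecting 512" and passes, and the next test, odd_alloc, requests 2098176 kB of huge pages, which rounds up to 1025 pages of 2048 kB. An odd count cannot be split evenly across the two NUMA nodes, so the per-node loop traced at setup/hugepages.sh@81-84 gives the last node 512 pages and node 0 the remaining 513. The sketch below is a hypothetical reconstruction of that split; only the resulting values (512 and 513) are taken from the trace, the exact expressions are an assumption.

# Hypothetical sketch of the odd per-node split traced below
# (setup/hugepages.sh@81-84): last node first, rounded-down share each time.
_nr_hugepages=1025
_no_nodes=2
declare -a nodes_test

while (( _no_nodes > 0 )); do
    # Give the highest remaining node the rounded-down share of what is left.
    nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
    _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
    (( _no_nodes-- ))
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512

The later meminfo dumps in the odd_alloc entries confirm the total: HugePages_Total: 1025 and Hugetlb: 2099200 kB (1025 x 2048 kB).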
00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:20.277 node0=512 expecting 512 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:20.277 node1=512 expecting 512 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:20.277 00:04:20.277 real 0m1.483s 00:04:20.277 user 0m0.612s 00:04:20.277 sys 0m0.833s 00:04:20.277 01:42:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.277 01:42:34 
setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:20.277 ************************************ 00:04:20.277 END TEST even_2G_alloc 00:04:20.277 ************************************ 00:04:20.277 01:42:34 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:20.277 01:42:34 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.277 01:42:34 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.277 01:42:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.277 ************************************ 00:04:20.277 START TEST odd_alloc 00:04:20.277 ************************************ 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:20.277 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.278 01:42:34 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.656 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.656 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.656 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.656 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.656 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.656 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.656 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.656 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.656 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.656 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:21.656 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:21.656 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:21.656 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:21.656 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:21.656 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:21.656 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:21.656 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.656 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43920716 kB' 'MemAvailable: 47412020 kB' 'Buffers: 2704 kB' 'Cached: 12224292 kB' 'SwapCached: 0 kB' 'Active: 9205072 
kB' 'Inactive: 3493800 kB' 'Active(anon): 8811568 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475016 kB' 'Mapped: 212444 kB' 'Shmem: 8339692 kB' 'KReclaimable: 195676 kB' 'Slab: 567808 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372132 kB' 'KernelStack: 12640 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9907784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.657 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # 
local mem_f mem 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.658 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43925736 kB' 'MemAvailable: 47417040 kB' 'Buffers: 2704 kB' 'Cached: 12224296 kB' 'SwapCached: 0 kB' 'Active: 9204988 kB' 'Inactive: 3493800 kB' 'Active(anon): 8811484 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474988 kB' 'Mapped: 212428 kB' 'Shmem: 8339696 kB' 'KReclaimable: 195676 kB' 'Slab: 567800 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372124 kB' 'KernelStack: 12672 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9907800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.659 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.660 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43925736 kB' 'MemAvailable: 47417040 kB' 'Buffers: 2704 kB' 'Cached: 12224312 kB' 'SwapCached: 0 kB' 'Active: 9204996 kB' 'Inactive: 3493800 kB' 'Active(anon): 8811492 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474964 kB' 'Mapped: 212428 kB' 'Shmem: 8339712 kB' 'KReclaimable: 195676 kB' 'Slab: 567824 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372148 kB' 'KernelStack: 12672 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9907820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 
'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.661 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
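The block of `IFS=': '` / `read -r var val _` / `[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]` / `continue` entries running through this part of the trace is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time until it reaches the requested key (HugePages_Rsvd here) and echoing its value. A minimal stand-alone sketch of the same pattern, assuming a hypothetical helper name and a zero fallback for missing fields (the real helper uses mapfile plus an extglob substitution to strip the "Node <n> " prefix, which this sketch replaces with sed):

  # simplified sketch of the get_meminfo pattern being traced (illustrative only)
  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      # with a node argument, read that NUMA node's copy instead
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the per-node line prefix
      echo 0   # fallback when the field is absent (simplification)
  }
  # e.g.: get_meminfo_sketch HugePages_Rsvd    -> 0 on this host
  #       get_meminfo_sketch HugePages_Surp 0  -> surplus pages on NUMA node 0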
00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.662 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 
01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:21.663 nr_hugepages=1025 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.663 resv_hugepages=0 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.663 surplus_hugepages=0 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.663 anon_hugepages=0 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:21.663 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43926028 kB' 'MemAvailable: 47417332 kB' 'Buffers: 2704 kB' 'Cached: 12224332 kB' 'SwapCached: 0 kB' 'Active: 9205044 kB' 'Inactive: 3493800 kB' 'Active(anon): 8811540 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474968 kB' 'Mapped: 212428 kB' 'Shmem: 8339732 kB' 'KReclaimable: 195676 kB' 'Slab: 567824 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372148 kB' 'KernelStack: 12672 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 9907840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.664 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
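The arithmetic checks traced at setup/hugepages.sh@107 and @110 assert that the kernel's global hugepage accounting matches what the test requested: HugePages_Total has to equal nr_hugepages plus surplus plus reserved pages, which in this run works out to 1025 == 1025 + 0 + 0 (nr_hugepages=1025 was echoed earlier). The same invariant can be spot-checked outside the harness with plain awk; the commands below are illustrative and not part of setup/hugepages.sh:

  # re-derive the @110 comparison from the live /proc files (illustrative)
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  nr=$(cat /proc/sys/vm/nr_hugepages)
  if (( total == nr + surp + resv )); then
      echo "hugepage accounting consistent: $total == $nr + $surp + $resv"
  else
      echo "mismatch: HugePages_Total=$total nr_hugepages=$nr surp=$surp resv=$resv"
  fi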
00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:21.665 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20136244 kB' 'MemUsed: 12740696 kB' 'SwapCached: 0 kB' 'Active: 7248952 kB' 'Inactive: 3259208 kB' 
'Active(anon): 7118248 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231136 kB' 'Mapped: 45192 kB' 'AnonPages: 280132 kB' 'Shmem: 6841224 kB' 'KernelStack: 7368 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107248 kB' 'Slab: 349128 kB' 'SReclaimable: 107248 kB' 'SUnreclaim: 241880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.666 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-@32 repeat the same IFS=': ' / read / compare / continue cycle for every remaining node0 meminfo field (MemFree through HugePages_Free); none matches HugePages_Surp ...]
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:21.667 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 23793508 kB' 'MemUsed: 3871272 kB' 'SwapCached: 0 kB' 'Active: 1956144 kB' 'Inactive: 234592 kB' 'Active(anon): 1693344 kB' 'Inactive(anon): 0 kB' 'Active(file): 262800 kB' 'Inactive(file): 234592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1995944 kB' 'Mapped: 167236 kB' 'AnonPages: 194836 kB' 'Shmem: 1498552 kB' 'KernelStack: 5304 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 218672 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 130244 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
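The get_meminfo HugePages_Surp 1 call traced above slurps the per-node meminfo file, strips the leading "Node <n> " prefix from every line, and scans key/value pairs until the requested field matches. A minimal self-contained sketch of that pattern follows; get_meminfo_sketch is an assumed name and this is not the project's setup/common.sh source, just the same mechanism the trace shows.

#!/usr/bin/env bash
# Minimal sketch of a meminfo field reader in the style traced above
# (assumption, not the SPDK setup/common.sh implementation).
shopt -s extglob   # needed for the +([0-9]) pattern that strips "Node N " prefixes

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo mem line
    # Per-node statistics live under sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 1 " prefix on per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"           # e.g. 0 for HugePages_Surp in the run above
            return 0
        fi
    done
    return 1
}

# Example, mirroring the traced call: per-node surplus hugepages on node 1
# get_meminfo_sketch HugePages_Surp 1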
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.668 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-@32 repeat the same IFS=': ' / read / compare / continue cycle for every remaining node1 meminfo field (MemFree through HugePages_Free); none matches HugePages_Surp ...]
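The odd_alloc case requests an odd total (512 + 513 = 1025 pages in the node dumps above), and the node0=512 expecting 513 / node1=513 expecting 512 lines just below show that the final check at hugepages.sh@130 compares the sorted per-node counts, so either node may end up holding the extra page. A small illustrative split-and-compare sketch in that spirit; split_and_check is an assumed name, not the hugepages.sh implementation.

#!/usr/bin/env bash
# Sketch: distribute an odd hugepage count over NUMA nodes and compare the sorted
# result against the expected pair, the way the @126-@130 check appears to work
# (illustrative names; not the SPDK hugepages.sh code).
split_and_check() {
    local total=$1 nodes=$2; shift 2
    local -a expected=("$@") got=()
    local i base=$(( total / nodes )) extra=$(( total % nodes ))
    for (( i = 0; i < nodes; i++ )); do
        got[i]=$(( base + (i < extra ? 1 : 0) ))   # leftover page lands on the first node(s)
    done
    # Compare as sorted sets, so it does not matter which node got the odd page.
    [[ "$(printf '%s\n' "${got[@]}" | sort -n)" == "$(printf '%s\n' "${expected[@]}" | sort -n)" ]]
}

split_and_check 1025 2 512 513 && echo "odd_alloc split OK"   # 512 + 513 = 1025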
00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:21.669 node0=512 expecting 513 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.669 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.670 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.670 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:21.670 node1=513 expecting 512 00:04:21.670 01:42:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:21.670 00:04:21.670 real 0m1.516s 00:04:21.670 user 0m0.633s 00:04:21.670 sys 0m0.849s 00:04:21.670 01:42:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.670 01:42:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.670 ************************************ 00:04:21.670 END TEST odd_alloc 00:04:21.670 ************************************ 00:04:21.670 01:42:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:21.670 01:42:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.670 01:42:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.670 01:42:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.928 ************************************ 00:04:21.928 START TEST custom_alloc 00:04:21.928 ************************************ 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # 
(( size >= default_hugepages )) 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.928 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.929 01:42:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.863 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:22.863 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:22.863 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:22.863 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:22.863 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:22.863 0000:00:04.3 (8086 
0e23): Already using the vfio-pci driver 00:04:22.863 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:22.863 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:22.863 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:22.863 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:22.863 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:22.863 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:22.863 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:22.863 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:22.863 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:22.863 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:22.863 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42848416 kB' 'MemAvailable: 46339720 kB' 'Buffers: 2704 kB' 'Cached: 12224420 kB' 'SwapCached: 0 kB' 'Active: 9210604 kB' 'Inactive: 3493800 kB' 'Active(anon): 8817100 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480552 kB' 'Mapped: 213028 kB' 
'Shmem: 8339820 kB' 'KReclaimable: 195676 kB' 'Slab: 567964 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372288 kB' 'KernelStack: 12688 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9913824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196472 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB'
00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.127 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-@32 repeat the same IFS=': ' / read / compare / continue cycle for every remaining /proc/meminfo field (MemFree through HardwareCorrupted); none matches AnonHugePages ...]
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
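The verification pass above first confirms transparent hugepages are not forced to [never] (the "always [madvise] never" test at hugepages.sh@96), then takes the system-wide AnonHugePages reading (0 kB here) and next reads HugePages_Surp the same way. A rough sketch of that preamble, reusing the get_meminfo_sketch helper from the earlier snippet; the names are assumptions, not the SPDK scripts.

# Rough sketch of the verification preamble traced at hugepages.sh@96-@99
# (assumes the get_meminfo_sketch helper defined above; not the SPDK code).
verify_preamble_sketch() {
    local anon=0 surp
    # Only count transparent (anonymous) hugepages when THP is not set to [never].
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB in the run above
    fi
    surp=$(get_meminfo_sketch HugePages_Surp)        # surplus pages, also 0 here
    echo "anon=${anon} surp=${surp}"
}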
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.129 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42852560 kB' 'MemAvailable: 46343864 kB' 'Buffers: 2704 kB' 'Cached: 12224424 kB' 'SwapCached: 0 kB' 'Active: 9205172 kB' 'Inactive: 3493800 kB' 'Active(anon): 8811668 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475024 kB' 'Mapped: 213000 kB' 'Shmem: 8339824 kB' 'KReclaimable: 195676 kB' 'Slab: 568016 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372340 kB' 'KernelStack: 12688 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9907724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB'
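The snapshot above is internally consistent with the HUGENODE split requested earlier in this test: nodes_hp[0]=512 plus nodes_hp[1]=1024 gives HugePages_Total: 1536, and 1536 pages at Hugepagesize: 2048 kB is exactly Hugetlb: 3145728 kB. A short sketch of assembling such a HUGENODE string and checking those totals; the variable names are illustrative only, not the SPDK scripts.

#!/usr/bin/env bash
# Sketch: build a HUGENODE string from per-node page counts and sanity-check the
# totals against the values reported above (illustrative, not the SPDK scripts).
declare -a nodes_hp=(512 1024)      # per-node 2 MB hugepage counts, as in this test
hugenode= total=0
for node in "${!nodes_hp[@]}"; do
    hugenode+="nodes_hp[$node]=${nodes_hp[node]},"
    (( total += nodes_hp[node] ))
done
hugenode=${hugenode%,}                           # -> nodes_hp[0]=512,nodes_hp[1]=1024
echo "HUGENODE=$hugenode"
echo "HugePages_Total expected: $total"          # 1536
echo "Hugetlb expected: $(( total * 2048 )) kB"  # 1536 * 2048 kB = 3145728 kB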
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.130 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 
01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.131 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42853100 kB' 'MemAvailable: 46344404 kB' 'Buffers: 2704 kB' 'Cached: 12224440 kB' 'SwapCached: 0 kB' 'Active: 9205000 kB' 'Inactive: 3493800 kB' 'Active(anon): 8811496 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474928 kB' 'Mapped: 212440 kB' 'Shmem: 8339840 kB' 'KReclaimable: 195676 kB' 'Slab: 568024 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372348 kB' 'KernelStack: 12704 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9907744 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.132 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.133 
01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.133 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.134 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:23.135 nr_hugepages=1536 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.135 resv_hugepages=0 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.135 surplus_hugepages=0 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.135 anon_hugepages=0 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 42853472 kB' 'MemAvailable: 46344776 kB' 'Buffers: 2704 kB' 'Cached: 12224464 kB' 'SwapCached: 0 kB' 'Active: 9205028 kB' 'Inactive: 3493800 kB' 'Active(anon): 8811524 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474932 kB' 'Mapped: 212440 kB' 'Shmem: 
8339864 kB' 'KReclaimable: 195676 kB' 'Slab: 568024 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 372348 kB' 'KernelStack: 12704 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 9907764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.135 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
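The trace above is the per-key scan that setup/common.sh's get_meminfo performs over /proc/meminfo: split each "Key: value" line on ': ', skip every key that does not match the requested one, and echo the value of the one that does. Below is a minimal sketch of that pattern, reconstructed from the trace; the exact body in SPDK's test/setup/common.sh may differ, so treat the details as assumptions.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the xtrace above (reconstructed,
# not a verbatim copy of test/setup/common.sh).
shopt -s extglob

get_meminfo() {
	local get=$1        # key to look up, e.g. HugePages_Surp
	local node=${2:-}   # optional NUMA node number
	local var val _
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, read the per-node counters instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node meminfo prefixes every line with "Node <n> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")

	# Scan "Key: value [kB]" lines until the requested key matches.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")

	return 1
}

get_meminfo HugePages_Total   # printed 1536 in the run traced above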
00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.136 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.137 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 
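The long run of "-- # continue" entries above is setup/common.sh's get_meminfo helper scanning the meminfo file one field at a time until it reaches HugePages_Total, then echoing the value (1536 on this machine), which hugepages.sh immediately checks with (( 1536 == nr_hugepages + surp + resv )). A minimal sketch of that helper, reconstructed from the trace for illustration (the real script's internals may differ):

    # Reconstructed sketch of get_meminfo; names and flow follow the xtrace
    # above, details of the actual setup/common.sh may differ.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line mem_f=/proc/meminfo
        local -a mem
        # Per-node lookups read the node's own meminfo and drop the "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated "continue" entries above
            echo "${val:-0}"
            return 0
        done
        return 1
    }

Used as get_meminfo HugePages_Total for the global count, or get_meminfo HugePages_Surp 0 for a single node, as the next steps in the trace do.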
00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.138 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20123660 kB' 'MemUsed: 12753280 kB' 'SwapCached: 0 kB' 'Active: 7248012 kB' 'Inactive: 3259208 kB' 'Active(anon): 7117308 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231144 kB' 'Mapped: 45192 kB' 'AnonPages: 279172 kB' 'Shmem: 6841232 kB' 'KernelStack: 7352 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107248 kB' 'Slab: 349344 kB' 'SReclaimable: 107248 kB' 'SUnreclaim: 242096 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.432 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.433 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664780 kB' 'MemFree: 22730556 kB' 'MemUsed: 4934224 kB' 'SwapCached: 0 kB' 'Active: 1957032 kB' 'Inactive: 234592 kB' 'Active(anon): 1694232 kB' 'Inactive(anon): 0 kB' 'Active(file): 262800 kB' 'Inactive(file): 234592 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1996060 kB' 'Mapped: 167248 kB' 'AnonPages: 195720 kB' 'Shmem: 1498668 kB' 'KernelStack: 5352 kB' 'PageTables: 3416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 88428 kB' 'Slab: 218680 kB' 'SReclaimable: 88428 kB' 'SUnreclaim: 130252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.434 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.435 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.436 01:42:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.436 node0=512 expecting 512 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:23.436 node1=1024 expecting 1024 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:23.436 00:04:23.436 real 0m1.506s 00:04:23.436 user 0m0.635s 00:04:23.436 sys 0m0.834s 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.436 01:42:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.436 ************************************ 00:04:23.436 END TEST custom_alloc 00:04:23.436 ************************************ 00:04:23.436 01:42:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:23.436 01:42:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.436 01:42:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.436 01:42:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.436 ************************************ 00:04:23.436 START TEST no_shrink_alloc 00:04:23.436 ************************************ 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:23.436 01:42:38 
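With both per-node HugePages_Surp lookups returning 0, the custom_alloc verification above reduces to simple arithmetic: node 0 reports 512 hugepages, node 1 reports 1024, 512 + 1024 = 1536 matches the HugePages_Total read earlier, and the final [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] comparison passes. A minimal sketch of that summation, using the node counts echoed in the trace:

    nodes_test=([0]=512 [1]=1024)   # per-node counts printed as "expecting" above
    total=0
    for n in "${!nodes_test[@]}"; do
        (( total += nodes_test[n] ))
    done
    (( total == 1536 )) && echo "node0=${nodes_test[0]} node1=${nodes_test[1]} total=$total"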
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.436 01:42:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.371 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.371 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.371 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.371 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.371 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.371 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.371 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.371 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.371 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.371 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.371 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.371 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.371 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.371 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.371 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.371 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.371 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- 
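no_shrink_alloc requests 2097152 kB of hugepages targeted at node 0; with the default 2048 kB hugepage size that works out to 2097152 / 2048 = 1024 pages, matching the nodes_test[0]=1024 set above and the HugePages_Total: 1024 that the following meminfo dump reports once setup.sh has re-run. The trace also shows the test checking that transparent hugepages are not set to "[never]" (the value here, "always [madvise] never", looks like the content of /sys/kernel/mm/transparent_hugepage/enabled) before it records AnonHugePages as a baseline. A minimal sketch of the page-count arithmetic, with illustrative names and an assumed 2048 kB page size:

    # Convert a requested size in kB into a 2 MB hugepage count, consistent with
    # what the trace does for get_test_nr_hugepages 2097152 0 (names illustrative).
    size_kb=2097152
    hugepage_kb=2048                              # assumed default hugepage size
    nr_hugepages=$(( size_kb / hugepage_kb ))     # 1024
    node_ids=(0)                                  # the single node id passed in
    for id in "${node_ids[@]}"; do
        nodes_test[id]=$nr_hugepages
    done
    echo "nr_hugepages=$nr_hugepages on node ${node_ids[*]}"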
setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.633 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43845460 kB' 'MemAvailable: 47336764 kB' 'Buffers: 2704 kB' 'Cached: 12224552 kB' 'SwapCached: 0 kB' 'Active: 9205660 kB' 'Inactive: 3493800 kB' 'Active(anon): 8812156 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 474960 kB' 'Mapped: 212460 kB' 'Shmem: 8339952 kB' 'KReclaimable: 195676 kB' 'Slab: 567332 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371656 kB' 'KernelStack: 12688 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9907964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.634 01:42:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.634 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.634
[trace condensed: setup/common.sh@31/@32 read every remaining /proc/meminfo field in turn (Buffers, Cached, SwapCached, Active, Inactive, the anon/file splits, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) and hit "continue" on each key that is not AnonHugePages]
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.635 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.635
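The set -x lines above are setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-node copy under /sys/devices/system/node), then reads the snapshot field by field with IFS=': ' until the requested key matches and its value is echoed. A minimal sketch of that lookup, reconstructed from the visible trace; the function name, the sed-based prefix strip and the error handling are illustrative assumptions, not the repository's exact code:

    # Sketch only: approximates what the traced get_meminfo call does.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # The trace checks /sys/devices/system/node/node$node/meminfo when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node <n> " prefix; strip it, then split on ': '.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    # Example: get_meminfo_sketch AnonHugePages  ->  0, matching the anon=0 assignment above.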
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43845564 kB' 'MemAvailable: 47336868 kB' 'Buffers: 2704 kB' 'Cached: 12224556 kB' 'SwapCached: 0 kB' 'Active: 9205512 kB' 'Inactive: 3493800 kB' 'Active(anon): 8812008 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475256 kB' 'Mapped: 212452 kB' 'Shmem: 8339956 kB' 'KReclaimable: 195676 kB' 'Slab: 567312 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371636 kB' 'KernelStack: 12704 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9907980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:24.635
[trace condensed: setup/common.sh@31/@32 walk the snapshot above field by field again, issuing "continue" for every key that is not HugePages_Surp]
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.637 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.637 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.637 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.637 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.637 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.637
[trace condensed: the get_meminfo prologue repeats (local node=, local var val, local mem_f mem, mem_f=/proc/meminfo, per-node meminfo check, [[ -n '' ]], mapfile -t mem, "Node N" prefix strip, IFS=': ')]
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.638
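For a quick manual look at the same counters outside the test harness, the hugepage-related fields can be pulled from /proc/meminfo in one pass. This one-liner is only for orientation and is not how setup/common.sh or setup/hugepages.sh collect the values:

    awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):/ { print $1, $2 }' /proc/meminfo
    # Against the snapshot printed above this would report:
    #   AnonHugePages: 0
    #   HugePages_Total: 1024
    #   HugePages_Free: 1024
    #   HugePages_Rsvd: 0
    #   HugePages_Surp: 0
    #   Hugepagesize: 2048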
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43848184 kB' 'MemAvailable: 47339488 kB' 'Buffers: 2704 kB' 'Cached: 12224572 kB' 'SwapCached: 0 kB' 'Active: 9205664 kB' 'Inactive: 3493800 kB' 'Active(anon): 8812160 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475428 kB' 'Mapped: 212452 kB' 'Shmem: 8339972 kB' 'KReclaimable: 195676 kB' 'Slab: 567400 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371724 kB' 'KernelStack: 12688 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9909168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:24.638
[trace condensed: setup/common.sh@31/@32 walk the snapshot field by field, issuing "continue" for every key that is not HugePages_Rsvd]
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.640 nr_hugepages=1024 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.640 resv_hugepages=0 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.640 surplus_hugepages=0 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.640 anon_hugepages=0 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.640
[trace condensed: the get_meminfo prologue repeats (local node=, local var val, local mem_f mem, mem_f=/proc/meminfo, per-node meminfo check, [[ -n '' ]], mapfile -t mem, "Node N" prefix strip, IFS=': ')]
01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43845564 kB' 'MemAvailable: 47336868 kB' 'Buffers: 2704 kB' 'Cached: 12224592 kB' 'SwapCached: 0 kB' 'Active: 9205908 kB' 'Inactive: 3493800 kB' 'Active(anon): 8812404 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475596 kB' 'Mapped: 212452 kB' 'Shmem: 8339992 kB' 'KReclaimable: 195676 kB' 'Slab: 567400 kB' 'SReclaimable: 195676 kB' 'SUnreclaim: 371724 kB' 'KernelStack: 12720 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9909028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:24.640
[trace condensed: the HugePages_Total scan starts and checks MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached and Active before the raw trace continues below]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.640 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.641 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.642 01:42:39 
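[note] The trace above repeats one pattern for every /proc/meminfo key: split the line on ': ', skip keys that do not match, and echo the value of the requested key (here HugePages_Total, which comes back as 1024 just below). A minimal sketch of that parse, not the actual setup/common.sh helper and with an illustrative name:

get_meminfo_sketch() {
    # Return the numeric value of one /proc/meminfo key, e.g. HugePages_Total.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"    # unit suffix (kB) falls into the discarded third field
            return 0
        fi
    done < /proc/meminfo
    return 1
}
# Usage matching the trace: get_meminfo_sketch HugePages_Total  ->  1024 on this node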
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:24.642 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.901 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19056336 kB' 'MemUsed: 13820604 kB' 'SwapCached: 0 kB' 'Active: 7249088 kB' 'Inactive: 3259208 kB' 'Active(anon): 7118384 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231200 kB' 'Mapped: 45192 kB' 'AnonPages: 280268 kB' 'Shmem: 6841288 kB' 'KernelStack: 7384 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107248 kB' 'Slab: 349040 kB' 'SReclaimable: 107248 kB' 'SUnreclaim: 241792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.902 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.902 01:42:39 
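[note] The readout in progress here is the node-aware variant: because node=0 was passed, the script switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix before running the same key/value parse. A condensed sketch of that branch (mem_f, mapfile and the prefix strip mirror the trace; the awk line is a simplification of the read loop):

node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
shopt -s extglob                    # needed for the +([0-9]) pattern below
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node 0 " on each line
printf '%s\n' "${mem[@]}" | awk -F': +' '$1 == "HugePages_Surp" {print $2}'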
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.903 node0=1024 expecting 1024 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.903 01:42:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.836 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:25.836 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:25.836 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:25.836 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:25.836 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:25.836 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:25.836 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:25.836 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:25.836 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.836 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:25.837 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:25.837 0000:80:04.5 (8086 0e25): Already using 
the vfio-pci driver 00:04:25.837 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:25.837 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:25.837 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:25.837 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:25.837 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:26.100 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43829160 kB' 'MemAvailable: 47320480 kB' 'Buffers: 2704 kB' 'Cached: 12224660 kB' 'SwapCached: 0 kB' 'Active: 9206264 kB' 'Inactive: 3493800 kB' 'Active(anon): 8812760 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 475836 kB' 'Mapped: 212460 kB' 'Shmem: 8340060 kB' 'KReclaimable: 195708 kB' 'Slab: 567456 kB' 'SReclaimable: 195708 kB' 'SUnreclaim: 371748 kB' 'KernelStack: 12720 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9908404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 
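[note] The INFO line a little above ("Requested 512 hugepages but 1024 already allocated on node0") follows from the CLEAR_HUGE=no / NRHUGE=512 settings: the existing 1024-page allocation is left alone rather than shrunk. A plausible reconstruction of that per-node decision, assuming setup.sh only grows an existing allocation when CLEAR_HUGE=no; the variable names and exact logic are illustrative, not the real script:

requested=512        # NRHUGE from the trace above
node_sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
allocated=$(cat "$node_sysfs")
if (( allocated >= requested )); then
    echo "INFO: Requested ${requested} hugepages but ${allocated} already allocated on node0"
else
    echo "$requested" > "$node_sysfs"   # only ever raise the per-node count
fi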
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.100 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 
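[note] The guard earlier in this block, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], tests the transparent-hugepage mode string (presumably read from /sys/kernel/mm/transparent_hugepage/enabled): AnonHugePages is only counted when THP is not forced off. A hedged sketch of that branch, reusing the parser sketched further up; the sysfs path is an assumption:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB on this node, matching the trace
else
    anon=0
fi
echo "anon_hugepages=${anon}"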
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.101 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 43832736 kB' 'MemAvailable: 47324056 kB' 'Buffers: 2704 kB' 'Cached: 12224664 kB' 'SwapCached: 0 kB' 'Active: 9206760 kB' 'Inactive: 3493800 kB' 'Active(anon): 8813256 kB' 'Inactive(anon): 0 kB' 'Active(file): 393504 kB' 'Inactive(file): 3493800 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 476328 kB' 'Mapped: 212460 kB' 'Shmem: 8340064 kB' 'KReclaimable: 195708 kB' 'Slab: 567544 kB' 'SReclaimable: 195708 kB' 'SUnreclaim: 371836 kB' 'KernelStack: 12752 kB' 'PageTables: 7864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 9908420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33408 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1930844 kB' 'DirectMap2M: 14766080 kB' 'DirectMap1G: 52428800 kB' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- 
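The snapshot itself can be cross-checked by hand: with 1024 pages of 2048 kB each, the Hugetlb field should equal HugePages_Total times Hugepagesize. A quick arithmetic check, not part of the test scripts, shown only to make the snapshot easier to read:

    # HugePages_Total * Hugepagesize should match the Hugetlb field in the snapshot above
    echo "$((1024 * 2048)) kB"    # prints "2097152 kB", i.e. 2 GiB of 2 MiB hugepages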
00:04:26.102 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # the snapshot is then scanned key by key; every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped via continue
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
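Each of these lookups walks the same path through setup/common.sh: pick /proc/meminfo (or a per-node meminfo when a node argument is given), read it into an array, strip any "Node <id>" prefix, then scan key by key until the requested field matches and its value is echoed. A minimal standalone sketch of that flow, for readers following the trace; the function name and the sed-based prefix strip are illustrative simplifications, not the actual helper:

    #!/usr/bin/env bash
    # simplified re-implementation of the lookup traced above (illustrative only)
    get_meminfo_sketch() {                    # usage: get_meminfo_sketch HugePages_Surp [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        # with a node argument, prefer the per-node view if it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # per-node files prefix every line with "Node <id> "; strip it so the key is field 1
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"                   # e.g. "0" for HugePages_Surp on this runner
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo_sketch HugePages_Surp         # -> 0, matching the surp=0 assignment above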
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' (second /proc/meminfo snapshot; identical to the one above except MemFree: 43833040 kB, MemAvailable: 47324360 kB, Active: 9207176 kB, Active(anon): 8813672 kB, AnonPages: 476752 kB, Slab: 567492 kB, SUnreclaim: 371784 kB, KernelStack: 12768 kB, PageTables: 7908 kB, Committed_AS: 9908444 kB)
00:04:26.104 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped via continue
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:26.106 nr_hugepages=1024
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:26.106 resv_hugepages=0
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:26.106 surplus_hugepages=0
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:26.106 anon_hugepages=0
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
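With anon, surp and resv all read back as 0 and nr_hugepages reported as 1024, the two arithmetic guards reduce to 1024 == 1024 + 0 + 0 and 1024 == 1024, so the allocation is considered intact before HugePages_Total is read back once more below. The same consistency check can be restated on its own with plain /proc/meminfo reads; this is an illustration, not the actual hugepages.sh logic:

    # standalone restatement of the accounting guard traced above (illustrative only)
    expected=1024                                                   # pages the test requested
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)     # 1024 on this runner
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)       # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)       # 0

    if (( expected == total + surp + resv )) && (( expected == total )); then
        echo "hugepage accounting consistent: total=$total surplus=$surp reserved=$resv"
    else
        echo "hugepage accounting mismatch: total=$total surplus=$surp reserved=$resv" >&2
    fi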
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.106 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.107 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' (third /proc/meminfo snapshot; hugepage fields unchanged at HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB, with minor drift in the general counters: MemFree: 43833040 kB, MemAvailable: 47324352 kB, Cached: 12224704 kB, Active: 9206116 kB, Active(anon): 8812612 kB, AnonPages: 475692 kB, Shmem: 8340104 kB, KReclaimable: 195692 kB, Slab: 567476 kB, SReclaimable: 195692 kB, SUnreclaim: 371784 kB, KernelStack: 12752 kB, PageTables: 7856 kB, Committed_AS: 9908464 kB, VmallocUsed: 196468 kB)
00:04:26.107 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # keys MemTotal through KernelStack are compared against HugePages_Total and skipped via continue
00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.108 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.109 01:42:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 19057024 kB' 'MemUsed: 13819916 kB' 'SwapCached: 0 kB' 'Active: 7249580 kB' 'Inactive: 3259208 kB' 'Active(anon): 7118876 kB' 'Inactive(anon): 0 kB' 'Active(file): 130704 kB' 'Inactive(file): 3259208 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10231336 kB' 'Mapped: 45192 kB' 'AnonPages: 280592 kB' 'Shmem: 6841424 kB' 'KernelStack: 7400 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 107280 kB' 'Slab: 349140 kB' 'SReclaimable: 107280 kB' 'SUnreclaim: 241860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
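Here the scan is repeated against node 0: get_meminfo was invoked as "get_meminfo HugePages_Surp 0", so mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and the leading "Node 0 " prefix is stripped so the field names line up with the /proc/meminfo format. A minimal sketch of that source selection, for illustration only:

# Illustrative sketch only: choose the meminfo source the way the
# trace does. Per-node lines read "Node 0 HugePages_Surp: 0", so the
# "Node <n> " prefix must be removed before matching field names.
node=0
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
sed 's/^Node [0-9]* //' "$mem_f" | grep '^HugePages_Surp:'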
00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.109 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.109 01:42:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.110 node0=1024 expecting 1024 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.110 00:04:26.110 real 0m2.875s 00:04:26.110 user 0m1.182s 00:04:26.110 sys 0m1.621s 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.110 01:42:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.110 ************************************ 00:04:26.110 END TEST no_shrink_alloc 00:04:26.110 ************************************ 00:04:26.369 01:42:40 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:26.369 01:42:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.369 01:42:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.369 00:04:26.369 real 0m11.697s 00:04:26.369 user 0m4.485s 00:04:26.369 sys 0m6.130s 00:04:26.369 01:42:41 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.369 01:42:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.369 ************************************ 00:04:26.369 END TEST hugepages 00:04:26.369 ************************************ 00:04:26.369 01:42:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:26.369 01:42:41 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.369 01:42:41 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.369 01:42:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:26.369 ************************************ 00:04:26.369 START TEST driver 00:04:26.369 ************************************ 00:04:26.369 01:42:41 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:26.369 * Looking for test storage... 
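Just above, the hugepages suite tears itself down with clear_hp: it loops over every node's hugepages-* directories and echoes 0 into each, then exports CLEAR_HUGE=yes. The redirection target is not visible in the xtrace (bash does not trace redirections), but the usual sysfs knob is nr_hugepages; a sketch of that teardown under that assumption:

# Illustrative sketch only: release hugepages on every NUMA node.
# Assumes the traced "echo 0" is redirected into nr_hugepages, which
# the xtrace itself does not show.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes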
00:04:26.369 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:26.369 01:42:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:26.369 01:42:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.369 01:42:41 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.899 01:42:43 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:28.900 01:42:43 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.900 01:42:43 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.900 01:42:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.900 ************************************ 00:04:28.900 START TEST guess_driver 00:04:28.900 ************************************ 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:28.900 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:28.900 Looking for driver=vfio-pci 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.900 01:42:43 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.275 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:30.276 01:42:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.211 01:42:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.743 00:04:33.743 real 0m4.810s 00:04:33.743 user 0m1.105s 00:04:33.743 sys 0m1.806s 00:04:33.743 01:42:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.743 01:42:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:33.743 ************************************ 00:04:33.743 END TEST guess_driver 00:04:33.743 ************************************ 00:04:33.743 00:04:33.743 real 0m7.426s 00:04:33.743 user 0m1.691s 00:04:33.743 sys 0m2.849s 00:04:33.743 01:42:48 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.743 
01:42:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:33.743 ************************************ 00:04:33.743 END TEST driver 00:04:33.743 ************************************ 00:04:33.743 01:42:48 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:33.743 01:42:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.743 01:42:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.743 01:42:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.743 ************************************ 00:04:33.743 START TEST devices 00:04:33.743 ************************************ 00:04:33.743 01:42:48 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:33.743 * Looking for test storage... 00:04:33.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:33.743 01:42:48 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:33.743 01:42:48 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:33.743 01:42:48 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.743 01:42:48 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.118 01:42:49 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:35.118 01:42:49 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:35.118 01:42:49 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:35.118 01:42:49 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:35.377 No valid GPT data, 
bailing 00:04:35.377 01:42:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:35.377 01:42:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:35.377 01:42:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:35.377 01:42:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:35.377 01:42:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:35.377 01:42:50 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:35.377 01:42:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:35.377 01:42:50 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.377 01:42:50 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.377 01:42:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:35.377 ************************************ 00:04:35.377 START TEST nvme_mount 00:04:35.377 ************************************ 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:35.377 01:42:50 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.377 01:42:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:36.312 Creating new GPT entries in memory. 00:04:36.312 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:36.312 other utilities. 00:04:36.312 01:42:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:36.312 01:42:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.312 01:42:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:36.312 01:42:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:36.312 01:42:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:37.247 Creating new GPT entries in memory. 00:04:37.247 The operation has completed successfully. 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1282875 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:37.247 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
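The nvme_mount test traced above wipes the target disk's partition tables, creates a single 1 GiB partition, formats it ext4 and mounts it under the test directory, then verifies the mount against the drive's PCI address (0000:88:00.0). Reduced to its essential commands — device name and mount point are the ones from this log, and the harness additionally wraps sgdisk in flock and waits for udev events via sync_dev_uevents.sh, which is omitted here:

# Illustrative sketch only: the partition/format/mount sequence
# seen in the trace above.
disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all              # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199   # one 1 GiB partition (sectors 2048..2099199)
mkfs.ext4 -qF "${disk}p1"             # quiet, force past existing signatures
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"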
00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.505 01:42:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:38.440 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.706 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.706 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.963 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.963 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.963 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.963 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:38.963 01:42:53 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.963 01:42:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:40.338 01:42:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.338 01:42:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.713 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.714 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.714 00:04:41.714 real 0m6.341s 00:04:41.714 user 0m1.471s 00:04:41.714 sys 0m2.423s 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.714 01:42:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:41.714 ************************************ 00:04:41.714 END TEST nvme_mount 00:04:41.714 ************************************ 
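For reference, the partition-based pass of the nvme_mount test traced above reduces to roughly the following shell steps. This is a sketch reconstructed from the xtrace output; $SPDK_DIR is shorthand (not in the log) for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk, and error handling is omitted.

  disk=/dev/nvme0n1
  mnt=$SPDK_DIR/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                             # destroy any existing GPT, as in the trace
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199    # one 1 GiB partition (2097152 512-byte sectors)
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"
  mount "${disk}p1" "$mnt"
  : > "$mnt/test_nvme"                                 # dummy file the verify step looks for
  # verify: setup.sh config must report the mounted device as active and skip binding its PCI function
  rm "$mnt/test_nvme"
  umount "$mnt"
  wipefs --all "${disk}p1"
  wipefs --all "$disk"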
00:04:41.714 01:42:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:41.714 01:42:56 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.714 01:42:56 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.714 01:42:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.714 ************************************ 00:04:41.714 START TEST dm_mount 00:04:41.714 ************************************ 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.714 01:42:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:42.691 Creating new GPT entries in memory. 00:04:42.692 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.692 other utilities. 00:04:42.692 01:42:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.692 01:42:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.692 01:42:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.692 01:42:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.692 01:42:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.629 Creating new GPT entries in memory. 00:04:43.629 The operation has completed successfully. 
00:04:43.629 01:42:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.629 01:42:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.629 01:42:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.629 01:42:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.629 01:42:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:45.004 The operation has completed successfully. 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1285271 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.004 01:42:59 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:45.938 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.197 01:43:00 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.197 01:43:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.131 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.131 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:47.131 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:47.131 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.131 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.132 01:43:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:47.390 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:47.390 00:04:47.390 real 0m5.699s 00:04:47.390 user 0m1.011s 00:04:47.390 sys 0m1.529s 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.390 01:43:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.390 ************************************ 00:04:47.390 END TEST dm_mount 00:04:47.390 ************************************ 00:04:47.390 01:43:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:47.390 01:43:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:47.390 01:43:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.390 01:43:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.390 01:43:02 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:47.390 01:43:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.390 01:43:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.648 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.648 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.648 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.648 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.648 01:43:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.648 00:04:47.648 real 0m13.927s 00:04:47.648 user 0m3.118s 00:04:47.648 sys 0m4.968s 00:04:47.648 01:43:02 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.648 01:43:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.648 ************************************ 00:04:47.648 END TEST devices 00:04:47.648 ************************************ 00:04:47.648 00:04:47.648 real 0m43.559s 00:04:47.648 user 0m12.551s 00:04:47.648 sys 0m19.167s 00:04:47.648 01:43:02 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.648 01:43:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.648 ************************************ 00:04:47.648 END TEST setup.sh 00:04:47.648 ************************************ 00:04:47.648 01:43:02 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:49.020 Hugepages 00:04:49.020 node hugesize free / total 00:04:49.020 node0 1048576kB 0 / 0 00:04:49.020 node0 2048kB 2048 / 2048 00:04:49.020 node1 1048576kB 0 / 0 00:04:49.020 node1 2048kB 0 / 0 00:04:49.020 00:04:49.020 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:49.020 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:49.020 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:49.020 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:49.020 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:49.020 01:43:03 -- spdk/autotest.sh@130 -- # uname -s 00:04:49.020 01:43:03 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:49.020 01:43:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:49.020 01:43:03 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.953 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.211 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.211 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.146 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.404 01:43:06 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:52.338 01:43:07 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:52.338 01:43:07 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:52.338 01:43:07 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.338 01:43:07 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:52.338 01:43:07 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:52.338 01:43:07 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:52.338 01:43:07 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.339 01:43:07 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.339 01:43:07 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:52.339 01:43:07 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:52.339 01:43:07 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:04:52.339 01:43:07 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.712 Waiting for block devices as requested 00:04:53.712 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.712 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:53.712 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:53.712 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:53.969 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:53.969 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:53.969 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:53.969 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:54.228 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:54.228 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:54.228 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:54.228 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:54.487 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:54.487 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:54.487 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:54.487 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:54.745 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:54.745 01:43:09 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 
00:04:54.745 01:43:09 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1500 -- # grep 0000:88:00.0/nvme/nvme 00:04:54.745 01:43:09 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:04:54.745 01:43:09 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:54.745 01:43:09 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:54.745 01:43:09 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:54.745 01:43:09 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:54.745 01:43:09 -- common/autotest_common.sh@1543 -- # oacs=' 0xf' 00:04:54.745 01:43:09 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:54.746 01:43:09 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:54.746 01:43:09 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:54.746 01:43:09 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:54.746 01:43:09 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:54.746 01:43:09 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:54.746 01:43:09 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:54.746 01:43:09 -- common/autotest_common.sh@1555 -- # continue 00:04:54.746 01:43:09 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:54.746 01:43:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:54.746 01:43:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.746 01:43:09 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:54.746 01:43:09 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.746 01:43:09 -- common/autotest_common.sh@10 -- # set +x 00:04:54.746 01:43:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.120 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.120 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.120 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.054 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.054 01:43:11 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:57.054 01:43:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.054 01:43:11 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.054 01:43:11 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:57.054 01:43:11 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:57.054 01:43:11 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.054 01:43:11 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:57.054 01:43:11 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:57.054 01:43:11 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:57.054 01:43:11 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:57.054 01:43:11 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:57.054 01:43:11 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.054 01:43:11 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.054 01:43:11 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:57.312 01:43:11 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:57.312 01:43:11 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:04:57.312 01:43:11 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:57.312 01:43:11 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:57.312 01:43:11 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:04:57.312 01:43:11 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:57.312 01:43:11 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:04:57.312 01:43:11 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:88:00.0 00:04:57.312 01:43:11 -- common/autotest_common.sh@1590 -- # [[ -z 0000:88:00.0 ]] 00:04:57.312 01:43:11 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=1290455 00:04:57.312 01:43:11 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.312 01:43:11 -- common/autotest_common.sh@1596 -- # waitforlisten 1290455 00:04:57.312 01:43:11 -- common/autotest_common.sh@829 -- # '[' -z 1290455 ']' 00:04:57.312 01:43:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.312 01:43:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.312 01:43:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.312 01:43:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.312 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:04:57.312 [2024-07-24 01:43:12.016792] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:04:57.312 [2024-07-24 01:43:12.016896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290455 ] 00:04:57.312 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.312 [2024-07-24 01:43:12.074440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.312 [2024-07-24 01:43:12.163812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.570 01:43:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.570 01:43:12 -- common/autotest_common.sh@862 -- # return 0 00:04:57.570 01:43:12 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:04:57.570 01:43:12 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:04:57.570 01:43:12 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:00.850 nvme0n1 00:05:00.850 01:43:15 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:01.108 [2024-07-24 01:43:15.746102] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:01.108 [2024-07-24 01:43:15.746157] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:01.108 request: 00:05:01.108 { 00:05:01.108 "nvme_ctrlr_name": "nvme0", 00:05:01.108 "password": "test", 00:05:01.108 "method": "bdev_nvme_opal_revert", 00:05:01.108 "req_id": 1 00:05:01.108 } 00:05:01.108 Got JSON-RPC error response 00:05:01.108 response: 00:05:01.108 { 00:05:01.108 "code": -32603, 00:05:01.108 "message": "Internal error" 00:05:01.108 } 00:05:01.108 01:43:15 -- common/autotest_common.sh@1602 -- # true 00:05:01.108 01:43:15 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:05:01.108 01:43:15 -- common/autotest_common.sh@1606 -- # killprocess 1290455 00:05:01.108 01:43:15 -- common/autotest_common.sh@948 -- # '[' -z 1290455 ']' 00:05:01.108 01:43:15 -- common/autotest_common.sh@952 -- # kill -0 1290455 00:05:01.108 01:43:15 -- common/autotest_common.sh@953 -- # uname 00:05:01.108 01:43:15 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:01.108 01:43:15 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290455 00:05:01.108 01:43:15 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:01.108 01:43:15 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:01.108 01:43:15 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290455' 00:05:01.108 killing process with pid 1290455 00:05:01.108 01:43:15 -- common/autotest_common.sh@967 -- # kill 1290455 00:05:01.108 01:43:15 -- common/autotest_common.sh@972 -- # wait 1290455 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:01.108 EAL: Unexpected size 0 of DMA 
remapping cleared instead of 2097152
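For reference, the two RPCs traced above can be replayed by hand against the same target. The commands below are taken verbatim from the trace (workspace paths shortened to the repo root) and are only a sketch, not part of the test itself. On this drive the OPAL admin SP session failed with status 18, which is why the JSON-RPC response above is the generic -32603 "Internal error".

    # Replay of the calls traced above (paths shortened; adjust to your checkout).
    RPC=./scripts/rpc.py

    # Attach the NVMe controller at 0000:88:00.0 as controller "nvme0" (yields nvme0n1).
    $RPC bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0

    # Revert the drive's OPAL TPer with password "test"; in this run the admin SP
    # session failed with status 18 and the call returned {"code": -32603}.
    $RPC bdev_nvme_opal_revert -b nvme0 -p test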
00:05:03.031 01:43:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:03.031 01:43:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:03.031 01:43:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:03.031 01:43:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:03.031 01:43:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:03.031 01:43:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:03.031 01:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.031 01:43:17 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:03.031 01:43:17 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:03.031 01:43:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.031 01:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.031 01:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:03.031 ************************************ 00:05:03.031 START TEST env 00:05:03.031 ************************************ 00:05:03.031 01:43:17 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:03.031 * Looking for test storage... 
00:05:03.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:03.031 01:43:17 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.031 01:43:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.031 01:43:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.031 01:43:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.031 ************************************ 00:05:03.031 START TEST env_memory 00:05:03.031 ************************************ 00:05:03.031 01:43:17 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:03.031 00:05:03.031 00:05:03.031 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.031 http://cunit.sourceforge.net/ 00:05:03.031 00:05:03.031 00:05:03.031 Suite: memory 00:05:03.031 Test: alloc and free memory map ...[2024-07-24 01:43:17.649047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:03.031 passed 00:05:03.031 Test: mem map translation ...[2024-07-24 01:43:17.669265] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:03.031 [2024-07-24 01:43:17.669289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:03.031 [2024-07-24 01:43:17.669337] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:03.031 [2024-07-24 01:43:17.669357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:03.031 passed 00:05:03.031 Test: mem map registration ...[2024-07-24 01:43:17.711118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:03.031 [2024-07-24 01:43:17.711140] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:03.031 passed 00:05:03.031 Test: mem map adjacent registrations ...passed 00:05:03.031 00:05:03.031 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.032 suites 1 1 n/a 0 0 00:05:03.032 tests 4 4 4 0 0 00:05:03.032 asserts 152 152 152 0 n/a 00:05:03.032 00:05:03.032 Elapsed time = 0.140 seconds 00:05:03.032 00:05:03.032 real 0m0.148s 00:05:03.032 user 0m0.143s 00:05:03.032 sys 0m0.005s 00:05:03.032 01:43:17 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.032 01:43:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:03.032 ************************************ 00:05:03.032 END TEST env_memory 00:05:03.032 ************************************ 00:05:03.032 01:43:17 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.032 01:43:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.032 01:43:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:05:03.032 01:43:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:03.032 ************************************ 00:05:03.032 START TEST env_vtophys 00:05:03.032 ************************************ 00:05:03.032 01:43:17 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:03.032 EAL: lib.eal log level changed from notice to debug 00:05:03.032 EAL: Detected lcore 0 as core 0 on socket 0 00:05:03.032 EAL: Detected lcore 1 as core 1 on socket 0 00:05:03.032 EAL: Detected lcore 2 as core 2 on socket 0 00:05:03.032 EAL: Detected lcore 3 as core 3 on socket 0 00:05:03.032 EAL: Detected lcore 4 as core 4 on socket 0 00:05:03.032 EAL: Detected lcore 5 as core 5 on socket 0 00:05:03.032 EAL: Detected lcore 6 as core 8 on socket 0 00:05:03.032 EAL: Detected lcore 7 as core 9 on socket 0 00:05:03.032 EAL: Detected lcore 8 as core 10 on socket 0 00:05:03.032 EAL: Detected lcore 9 as core 11 on socket 0 00:05:03.032 EAL: Detected lcore 10 as core 12 on socket 0 00:05:03.032 EAL: Detected lcore 11 as core 13 on socket 0 00:05:03.032 EAL: Detected lcore 12 as core 0 on socket 1 00:05:03.032 EAL: Detected lcore 13 as core 1 on socket 1 00:05:03.032 EAL: Detected lcore 14 as core 2 on socket 1 00:05:03.032 EAL: Detected lcore 15 as core 3 on socket 1 00:05:03.032 EAL: Detected lcore 16 as core 4 on socket 1 00:05:03.032 EAL: Detected lcore 17 as core 5 on socket 1 00:05:03.032 EAL: Detected lcore 18 as core 8 on socket 1 00:05:03.032 EAL: Detected lcore 19 as core 9 on socket 1 00:05:03.032 EAL: Detected lcore 20 as core 10 on socket 1 00:05:03.032 EAL: Detected lcore 21 as core 11 on socket 1 00:05:03.032 EAL: Detected lcore 22 as core 12 on socket 1 00:05:03.032 EAL: Detected lcore 23 as core 13 on socket 1 00:05:03.032 EAL: Detected lcore 24 as core 0 on socket 0 00:05:03.032 EAL: Detected lcore 25 as core 1 on socket 0 00:05:03.032 EAL: Detected lcore 26 as core 2 on socket 0 00:05:03.032 EAL: Detected lcore 27 as core 3 on socket 0 00:05:03.032 EAL: Detected lcore 28 as core 4 on socket 0 00:05:03.032 EAL: Detected lcore 29 as core 5 on socket 0 00:05:03.032 EAL: Detected lcore 30 as core 8 on socket 0 00:05:03.032 EAL: Detected lcore 31 as core 9 on socket 0 00:05:03.032 EAL: Detected lcore 32 as core 10 on socket 0 00:05:03.032 EAL: Detected lcore 33 as core 11 on socket 0 00:05:03.032 EAL: Detected lcore 34 as core 12 on socket 0 00:05:03.032 EAL: Detected lcore 35 as core 13 on socket 0 00:05:03.032 EAL: Detected lcore 36 as core 0 on socket 1 00:05:03.032 EAL: Detected lcore 37 as core 1 on socket 1 00:05:03.032 EAL: Detected lcore 38 as core 2 on socket 1 00:05:03.032 EAL: Detected lcore 39 as core 3 on socket 1 00:05:03.032 EAL: Detected lcore 40 as core 4 on socket 1 00:05:03.032 EAL: Detected lcore 41 as core 5 on socket 1 00:05:03.032 EAL: Detected lcore 42 as core 8 on socket 1 00:05:03.032 EAL: Detected lcore 43 as core 9 on socket 1 00:05:03.032 EAL: Detected lcore 44 as core 10 on socket 1 00:05:03.032 EAL: Detected lcore 45 as core 11 on socket 1 00:05:03.032 EAL: Detected lcore 46 as core 12 on socket 1 00:05:03.032 EAL: Detected lcore 47 as core 13 on socket 1 00:05:03.032 EAL: Maximum logical cores by configuration: 128 00:05:03.032 EAL: Detected CPU lcores: 48 00:05:03.032 EAL: Detected NUMA nodes: 2 00:05:03.032 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:03.032 EAL: Detected shared linkage of DPDK 00:05:03.032 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:03.032 EAL: Registered [vdev] bus. 00:05:03.032 EAL: bus.vdev log level changed from disabled to notice 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:03.032 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:03.032 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:03.032 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:03.032 EAL: No shared files mode enabled, IPC will be disabled 00:05:03.032 EAL: No shared files mode enabled, IPC is disabled 00:05:03.032 EAL: Bus pci wants IOVA as 'DC' 00:05:03.032 EAL: Bus vdev wants IOVA as 'DC' 00:05:03.032 EAL: Buses did not request a specific IOVA mode. 00:05:03.032 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:03.032 EAL: Selected IOVA mode 'VA' 00:05:03.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.032 EAL: Probing VFIO support... 00:05:03.032 EAL: IOMMU type 1 (Type 1) is supported 00:05:03.032 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:03.032 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:03.032 EAL: VFIO support initialized 00:05:03.032 EAL: Ask a virtual area of 0x2e000 bytes 00:05:03.032 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:03.032 EAL: Setting up physically contiguous memory... 
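The hugepage and VFIO lines above are EAL probing the host: no free 2048 kB hugepages reported on node 1, IOMMU type 1 supported, and IOVA-as-VA mode selected. As a rough aid when reproducing this environment, the same state can be inspected with standard Linux sysfs/procfs interfaces; this is a generic sketch, not something the test itself runs.

    # Per-node 2 MB hugepage pools (EAL reported none free on node 1 in this run).
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

    # Overall hugepage accounting.
    grep -i huge /proc/meminfo

    # A populated /sys/kernel/iommu_groups plus loaded vfio modules is what lets EAL
    # pick VFIO (IOMMU type 1) and IOVA-as-VA, as the trace above shows.
    ls /sys/kernel/iommu_groups | wc -l
    lsmod | grep vfio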
00:05:03.032 EAL: Setting maximum number of open files to 524288 00:05:03.032 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:03.032 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:03.032 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:03.032 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:03.032 EAL: Ask a virtual area of 0x61000 bytes 00:05:03.032 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:03.032 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:03.032 EAL: Ask a virtual area of 0x400000000 bytes 00:05:03.032 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:03.032 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:03.032 EAL: Hugepages will be freed exactly as allocated. 00:05:03.032 EAL: No shared files mode enabled, IPC is disabled 00:05:03.032 EAL: No shared files mode enabled, IPC is disabled 00:05:03.032 EAL: TSC frequency is ~2700000 KHz 00:05:03.032 EAL: Main lcore 0 is ready (tid=7ff36bb27a00;cpuset=[0]) 00:05:03.032 EAL: Trying to obtain current memory policy. 00:05:03.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.032 EAL: Restoring previous memory policy: 0 00:05:03.032 EAL: request: mp_malloc_sync 00:05:03.032 EAL: No shared files mode enabled, IPC is disabled 00:05:03.032 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.032 EAL: No shared files mode enabled, IPC is disabled 00:05:03.032 EAL: No shared files mode enabled, IPC is disabled 00:05:03.032 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.032 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.032 00:05:03.032 00:05:03.032 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.033 http://cunit.sourceforge.net/ 00:05:03.033 00:05:03.033 00:05:03.033 Suite: components_suite 00:05:03.033 Test: vtophys_malloc_test ...passed 00:05:03.033 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:03.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.033 EAL: Restoring previous memory policy: 4 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.033 EAL: Trying to obtain current memory policy. 00:05:03.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.033 EAL: Restoring previous memory policy: 4 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.033 EAL: Trying to obtain current memory policy. 00:05:03.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.033 EAL: Restoring previous memory policy: 4 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.033 EAL: Trying to obtain current memory policy. 
00:05:03.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.033 EAL: Restoring previous memory policy: 4 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.033 EAL: Trying to obtain current memory policy. 00:05:03.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.033 EAL: Restoring previous memory policy: 4 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.033 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.033 EAL: request: mp_malloc_sync 00:05:03.033 EAL: No shared files mode enabled, IPC is disabled 00:05:03.033 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.033 EAL: Trying to obtain current memory policy. 00:05:03.033 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.291 EAL: Restoring previous memory policy: 4 00:05:03.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.291 EAL: request: mp_malloc_sync 00:05:03.291 EAL: No shared files mode enabled, IPC is disabled 00:05:03.291 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.291 EAL: request: mp_malloc_sync 00:05:03.291 EAL: No shared files mode enabled, IPC is disabled 00:05:03.291 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.291 EAL: Trying to obtain current memory policy. 00:05:03.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.291 EAL: Restoring previous memory policy: 4 00:05:03.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.291 EAL: request: mp_malloc_sync 00:05:03.291 EAL: No shared files mode enabled, IPC is disabled 00:05:03.291 EAL: Heap on socket 0 was expanded by 130MB 00:05:03.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.291 EAL: request: mp_malloc_sync 00:05:03.291 EAL: No shared files mode enabled, IPC is disabled 00:05:03.291 EAL: Heap on socket 0 was shrunk by 130MB 00:05:03.291 EAL: Trying to obtain current memory policy. 00:05:03.291 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.291 EAL: Restoring previous memory policy: 4 00:05:03.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.291 EAL: request: mp_malloc_sync 00:05:03.291 EAL: No shared files mode enabled, IPC is disabled 00:05:03.291 EAL: Heap on socket 0 was expanded by 258MB 00:05:03.291 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.548 EAL: request: mp_malloc_sync 00:05:03.548 EAL: No shared files mode enabled, IPC is disabled 00:05:03.548 EAL: Heap on socket 0 was shrunk by 258MB 00:05:03.548 EAL: Trying to obtain current memory policy. 
00:05:03.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.548 EAL: Restoring previous memory policy: 4 00:05:03.548 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.548 EAL: request: mp_malloc_sync 00:05:03.548 EAL: No shared files mode enabled, IPC is disabled 00:05:03.548 EAL: Heap on socket 0 was expanded by 514MB 00:05:03.806 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.806 EAL: request: mp_malloc_sync 00:05:03.806 EAL: No shared files mode enabled, IPC is disabled 00:05:03.806 EAL: Heap on socket 0 was shrunk by 514MB 00:05:03.806 EAL: Trying to obtain current memory policy. 00:05:03.806 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.063 EAL: Restoring previous memory policy: 4 00:05:04.063 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.063 EAL: request: mp_malloc_sync 00:05:04.063 EAL: No shared files mode enabled, IPC is disabled 00:05:04.063 EAL: Heap on socket 0 was expanded by 1026MB 00:05:04.320 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.578 EAL: request: mp_malloc_sync 00:05:04.578 EAL: No shared files mode enabled, IPC is disabled 00:05:04.578 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:04.578 passed 00:05:04.578 00:05:04.578 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.578 suites 1 1 n/a 0 0 00:05:04.578 tests 2 2 2 0 0 00:05:04.578 asserts 497 497 497 0 n/a 00:05:04.578 00:05:04.578 Elapsed time = 1.354 seconds 00:05:04.578 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.578 EAL: request: mp_malloc_sync 00:05:04.578 EAL: No shared files mode enabled, IPC is disabled 00:05:04.578 EAL: Heap on socket 0 was shrunk by 2MB 00:05:04.578 EAL: No shared files mode enabled, IPC is disabled 00:05:04.578 EAL: No shared files mode enabled, IPC is disabled 00:05:04.578 EAL: No shared files mode enabled, IPC is disabled 00:05:04.578 00:05:04.578 real 0m1.465s 00:05:04.578 user 0m0.832s 00:05:04.578 sys 0m0.601s 00:05:04.578 01:43:19 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.578 01:43:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:04.578 ************************************ 00:05:04.578 END TEST env_vtophys 00:05:04.578 ************************************ 00:05:04.578 01:43:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.578 01:43:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.578 01:43:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.578 01:43:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.578 ************************************ 00:05:04.578 START TEST env_pci 00:05:04.578 ************************************ 00:05:04.578 01:43:19 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:04.578 00:05:04.578 00:05:04.578 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.579 http://cunit.sourceforge.net/ 00:05:04.579 00:05:04.579 00:05:04.579 Suite: pci 00:05:04.579 Test: pci_hook ...[2024-07-24 01:43:19.337952] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1291344 has claimed it 00:05:04.579 EAL: Cannot find device (10000:00:01.0) 00:05:04.579 EAL: Failed to attach device on primary process 00:05:04.579 passed 00:05:04.579 00:05:04.579 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:04.579 suites 1 1 n/a 0 0 00:05:04.579 tests 1 1 1 0 0 00:05:04.579 asserts 25 25 25 0 n/a 00:05:04.579 00:05:04.579 Elapsed time = 0.021 seconds 00:05:04.579 00:05:04.579 real 0m0.033s 00:05:04.579 user 0m0.009s 00:05:04.579 sys 0m0.024s 00:05:04.579 01:43:19 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.579 01:43:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:04.579 ************************************ 00:05:04.579 END TEST env_pci 00:05:04.579 ************************************ 00:05:04.579 01:43:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:04.579 01:43:19 env -- env/env.sh@15 -- # uname 00:05:04.579 01:43:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:04.579 01:43:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:04.579 01:43:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.579 01:43:19 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:04.579 01:43:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.579 01:43:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.579 ************************************ 00:05:04.579 START TEST env_dpdk_post_init 00:05:04.579 ************************************ 00:05:04.579 01:43:19 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:04.579 EAL: Detected CPU lcores: 48 00:05:04.579 EAL: Detected NUMA nodes: 2 00:05:04.579 EAL: Detected shared linkage of DPDK 00:05:04.579 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:04.579 EAL: Selected IOVA mode 'VA' 00:05:04.579 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.579 EAL: VFIO support initialized 00:05:04.579 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:04.836 EAL: Using IOMMU type 1 (Type 1) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:04.836 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:04.837 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:04.837 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:04.837 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:04.837 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:04.837 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:04.837 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:05.768 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:09.046 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:09.046 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:09.046 Starting DPDK initialization... 00:05:09.046 Starting SPDK post initialization... 00:05:09.046 SPDK NVMe probe 00:05:09.046 Attaching to 0000:88:00.0 00:05:09.046 Attached to 0000:88:00.0 00:05:09.046 Cleaning up... 00:05:09.046 00:05:09.046 real 0m4.428s 00:05:09.046 user 0m3.295s 00:05:09.046 sys 0m0.196s 00:05:09.046 01:43:23 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.046 01:43:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.046 ************************************ 00:05:09.046 END TEST env_dpdk_post_init 00:05:09.046 ************************************ 00:05:09.046 01:43:23 env -- env/env.sh@26 -- # uname 00:05:09.046 01:43:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:09.046 01:43:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.046 01:43:23 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.046 01:43:23 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.046 01:43:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.046 ************************************ 00:05:09.046 START TEST env_mem_callbacks 00:05:09.046 ************************************ 00:05:09.046 01:43:23 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.046 EAL: Detected CPU lcores: 48 00:05:09.046 EAL: Detected NUMA nodes: 2 00:05:09.046 EAL: Detected shared linkage of DPDK 00:05:09.046 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.046 EAL: Selected IOVA mode 'VA' 00:05:09.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.046 EAL: VFIO support initialized 00:05:09.046 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.046 00:05:09.046 00:05:09.046 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.046 http://cunit.sourceforge.net/ 00:05:09.046 00:05:09.046 00:05:09.046 Suite: memory 00:05:09.046 Test: test ... 
00:05:09.046 register 0x200000200000 2097152 00:05:09.046 malloc 3145728 00:05:09.046 register 0x200000400000 4194304 00:05:09.046 buf 0x200000500000 len 3145728 PASSED 00:05:09.046 malloc 64 00:05:09.046 buf 0x2000004fff40 len 64 PASSED 00:05:09.046 malloc 4194304 00:05:09.046 register 0x200000800000 6291456 00:05:09.046 buf 0x200000a00000 len 4194304 PASSED 00:05:09.046 free 0x200000500000 3145728 00:05:09.046 free 0x2000004fff40 64 00:05:09.046 unregister 0x200000400000 4194304 PASSED 00:05:09.046 free 0x200000a00000 4194304 00:05:09.046 unregister 0x200000800000 6291456 PASSED 00:05:09.046 malloc 8388608 00:05:09.046 register 0x200000400000 10485760 00:05:09.046 buf 0x200000600000 len 8388608 PASSED 00:05:09.046 free 0x200000600000 8388608 00:05:09.046 unregister 0x200000400000 10485760 PASSED 00:05:09.046 passed 00:05:09.046 00:05:09.046 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.046 suites 1 1 n/a 0 0 00:05:09.046 tests 1 1 1 0 0 00:05:09.046 asserts 15 15 15 0 n/a 00:05:09.046 00:05:09.046 Elapsed time = 0.005 seconds 00:05:09.046 00:05:09.046 real 0m0.047s 00:05:09.046 user 0m0.009s 00:05:09.046 sys 0m0.038s 00:05:09.046 01:43:23 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.046 01:43:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:09.046 ************************************ 00:05:09.046 END TEST env_mem_callbacks 00:05:09.046 ************************************ 00:05:09.304 00:05:09.304 real 0m6.407s 00:05:09.304 user 0m4.402s 00:05:09.304 sys 0m1.054s 00:05:09.304 01:43:23 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.304 01:43:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.304 ************************************ 00:05:09.304 END TEST env 00:05:09.304 ************************************ 00:05:09.304 01:43:23 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:09.304 01:43:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.304 01:43:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.304 01:43:23 -- common/autotest_common.sh@10 -- # set +x 00:05:09.304 ************************************ 00:05:09.304 START TEST rpc 00:05:09.304 ************************************ 00:05:09.304 01:43:23 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:09.304 * Looking for test storage... 00:05:09.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.304 01:43:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1292006 00:05:09.304 01:43:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:09.304 01:43:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.304 01:43:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1292006 00:05:09.304 01:43:24 rpc -- common/autotest_common.sh@829 -- # '[' -z 1292006 ']' 00:05:09.304 01:43:24 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.304 01:43:24 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.304 01:43:24 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
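The rpc suite that starts here launches spdk_tgt with the bdev tracepoint group enabled and waits for its UNIX-domain RPC socket before issuing any commands. A minimal manual equivalent, built only from commands and parameters visible in this trace and run from the SPDK repo root, is sketched below; it mirrors the bdev operations the rpc_integrity test drives next.

    # Start the SPDK target with the bdev tracepoint group, as the harness does.
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!

    # Wait for the default RPC socket before talking to the target.
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

    # The same bdev operations rpc_integrity exercises below.
    ./scripts/rpc.py bdev_get_bdevs                        # empty list at first
    ./scripts/rpc.py bdev_malloc_create 8 512              # 16384 x 512 B blocks = 8 MiB
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0

    kill $tgt_pid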
00:05:09.304 01:43:24 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.304 01:43:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.304 [2024-07-24 01:43:24.097429] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:09.304 [2024-07-24 01:43:24.097521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292006 ] 00:05:09.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.304 [2024-07-24 01:43:24.160525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.562 [2024-07-24 01:43:24.253855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.562 [2024-07-24 01:43:24.253928] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1292006' to capture a snapshot of events at runtime. 00:05:09.563 [2024-07-24 01:43:24.253955] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.563 [2024-07-24 01:43:24.253968] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.563 [2024-07-24 01:43:24.253980] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1292006 for offline analysis/debug. 00:05:09.563 [2024-07-24 01:43:24.254012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.821 01:43:24 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.821 01:43:24 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:09.821 01:43:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.821 01:43:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:09.821 01:43:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.821 01:43:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.821 01:43:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.821 01:43:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.821 01:43:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.821 ************************************ 00:05:09.821 START TEST rpc_integrity 00:05:09.821 ************************************ 00:05:09.821 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:09.821 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.821 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.821 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.821 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.821 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.821 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.821 01:43:24 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.821 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.821 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.822 { 00:05:09.822 "name": "Malloc0", 00:05:09.822 "aliases": [ 00:05:09.822 "1b1081e9-ef5a-4c8c-aef6-9edc02f95d1e" 00:05:09.822 ], 00:05:09.822 "product_name": "Malloc disk", 00:05:09.822 "block_size": 512, 00:05:09.822 "num_blocks": 16384, 00:05:09.822 "uuid": "1b1081e9-ef5a-4c8c-aef6-9edc02f95d1e", 00:05:09.822 "assigned_rate_limits": { 00:05:09.822 "rw_ios_per_sec": 0, 00:05:09.822 "rw_mbytes_per_sec": 0, 00:05:09.822 "r_mbytes_per_sec": 0, 00:05:09.822 "w_mbytes_per_sec": 0 00:05:09.822 }, 00:05:09.822 "claimed": false, 00:05:09.822 "zoned": false, 00:05:09.822 "supported_io_types": { 00:05:09.822 "read": true, 00:05:09.822 "write": true, 00:05:09.822 "unmap": true, 00:05:09.822 "flush": true, 00:05:09.822 "reset": true, 00:05:09.822 "nvme_admin": false, 00:05:09.822 "nvme_io": false, 00:05:09.822 "nvme_io_md": false, 00:05:09.822 "write_zeroes": true, 00:05:09.822 "zcopy": true, 00:05:09.822 "get_zone_info": false, 00:05:09.822 "zone_management": false, 00:05:09.822 "zone_append": false, 00:05:09.822 "compare": false, 00:05:09.822 "compare_and_write": false, 00:05:09.822 "abort": true, 00:05:09.822 "seek_hole": false, 00:05:09.822 "seek_data": false, 00:05:09.822 "copy": true, 00:05:09.822 "nvme_iov_md": false 00:05:09.822 }, 00:05:09.822 "memory_domains": [ 00:05:09.822 { 00:05:09.822 "dma_device_id": "system", 00:05:09.822 "dma_device_type": 1 00:05:09.822 }, 00:05:09.822 { 00:05:09.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.822 "dma_device_type": 2 00:05:09.822 } 00:05:09.822 ], 00:05:09.822 "driver_specific": {} 00:05:09.822 } 00:05:09.822 ]' 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.822 [2024-07-24 01:43:24.650966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.822 [2024-07-24 01:43:24.651020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.822 [2024-07-24 01:43:24.651045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x182caf0 00:05:09.822 [2024-07-24 01:43:24.651060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.822 [2024-07-24 01:43:24.652572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:09.822 [2024-07-24 01:43:24.652599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.822 Passthru0 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.822 { 00:05:09.822 "name": "Malloc0", 00:05:09.822 "aliases": [ 00:05:09.822 "1b1081e9-ef5a-4c8c-aef6-9edc02f95d1e" 00:05:09.822 ], 00:05:09.822 "product_name": "Malloc disk", 00:05:09.822 "block_size": 512, 00:05:09.822 "num_blocks": 16384, 00:05:09.822 "uuid": "1b1081e9-ef5a-4c8c-aef6-9edc02f95d1e", 00:05:09.822 "assigned_rate_limits": { 00:05:09.822 "rw_ios_per_sec": 0, 00:05:09.822 "rw_mbytes_per_sec": 0, 00:05:09.822 "r_mbytes_per_sec": 0, 00:05:09.822 "w_mbytes_per_sec": 0 00:05:09.822 }, 00:05:09.822 "claimed": true, 00:05:09.822 "claim_type": "exclusive_write", 00:05:09.822 "zoned": false, 00:05:09.822 "supported_io_types": { 00:05:09.822 "read": true, 00:05:09.822 "write": true, 00:05:09.822 "unmap": true, 00:05:09.822 "flush": true, 00:05:09.822 "reset": true, 00:05:09.822 "nvme_admin": false, 00:05:09.822 "nvme_io": false, 00:05:09.822 "nvme_io_md": false, 00:05:09.822 "write_zeroes": true, 00:05:09.822 "zcopy": true, 00:05:09.822 "get_zone_info": false, 00:05:09.822 "zone_management": false, 00:05:09.822 "zone_append": false, 00:05:09.822 "compare": false, 00:05:09.822 "compare_and_write": false, 00:05:09.822 "abort": true, 00:05:09.822 "seek_hole": false, 00:05:09.822 "seek_data": false, 00:05:09.822 "copy": true, 00:05:09.822 "nvme_iov_md": false 00:05:09.822 }, 00:05:09.822 "memory_domains": [ 00:05:09.822 { 00:05:09.822 "dma_device_id": "system", 00:05:09.822 "dma_device_type": 1 00:05:09.822 }, 00:05:09.822 { 00:05:09.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.822 "dma_device_type": 2 00:05:09.822 } 00:05:09.822 ], 00:05:09.822 "driver_specific": {} 00:05:09.822 }, 00:05:09.822 { 00:05:09.822 "name": "Passthru0", 00:05:09.822 "aliases": [ 00:05:09.822 "723aef0c-ea82-57c3-a228-b139e94992cf" 00:05:09.822 ], 00:05:09.822 "product_name": "passthru", 00:05:09.822 "block_size": 512, 00:05:09.822 "num_blocks": 16384, 00:05:09.822 "uuid": "723aef0c-ea82-57c3-a228-b139e94992cf", 00:05:09.822 "assigned_rate_limits": { 00:05:09.822 "rw_ios_per_sec": 0, 00:05:09.822 "rw_mbytes_per_sec": 0, 00:05:09.822 "r_mbytes_per_sec": 0, 00:05:09.822 "w_mbytes_per_sec": 0 00:05:09.822 }, 00:05:09.822 "claimed": false, 00:05:09.822 "zoned": false, 00:05:09.822 "supported_io_types": { 00:05:09.822 "read": true, 00:05:09.822 "write": true, 00:05:09.822 "unmap": true, 00:05:09.822 "flush": true, 00:05:09.822 "reset": true, 00:05:09.822 "nvme_admin": false, 00:05:09.822 "nvme_io": false, 00:05:09.822 "nvme_io_md": false, 00:05:09.822 "write_zeroes": true, 00:05:09.822 "zcopy": true, 00:05:09.822 "get_zone_info": false, 00:05:09.822 "zone_management": false, 00:05:09.822 "zone_append": false, 00:05:09.822 "compare": false, 00:05:09.822 "compare_and_write": false, 00:05:09.822 "abort": true, 00:05:09.822 "seek_hole": false, 00:05:09.822 "seek_data": false, 00:05:09.822 "copy": true, 00:05:09.822 "nvme_iov_md": false 00:05:09.822 
}, 00:05:09.822 "memory_domains": [ 00:05:09.822 { 00:05:09.822 "dma_device_id": "system", 00:05:09.822 "dma_device_type": 1 00:05:09.822 }, 00:05:09.822 { 00:05:09.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.822 "dma_device_type": 2 00:05:09.822 } 00:05:09.822 ], 00:05:09.822 "driver_specific": { 00:05:09.822 "passthru": { 00:05:09.822 "name": "Passthru0", 00:05:09.822 "base_bdev_name": "Malloc0" 00:05:09.822 } 00:05:09.822 } 00:05:09.822 } 00:05:09.822 ]' 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.822 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:09.822 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.081 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.081 01:43:24 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.081 00:05:10.081 real 0m0.237s 00:05:10.081 user 0m0.157s 00:05:10.081 sys 0m0.020s 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 ************************************ 00:05:10.081 END TEST rpc_integrity 00:05:10.081 ************************************ 00:05:10.081 01:43:24 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.081 01:43:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.081 01:43:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.081 01:43:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 ************************************ 00:05:10.081 START TEST rpc_plugins 00:05:10.081 ************************************ 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.081 { 00:05:10.081 "name": "Malloc1", 00:05:10.081 "aliases": [ 00:05:10.081 "6791cc62-79d0-4b89-9dd0-7b5c0db47c89" 00:05:10.081 ], 00:05:10.081 "product_name": "Malloc disk", 00:05:10.081 "block_size": 4096, 00:05:10.081 "num_blocks": 256, 00:05:10.081 "uuid": "6791cc62-79d0-4b89-9dd0-7b5c0db47c89", 00:05:10.081 "assigned_rate_limits": { 00:05:10.081 "rw_ios_per_sec": 0, 00:05:10.081 "rw_mbytes_per_sec": 0, 00:05:10.081 "r_mbytes_per_sec": 0, 00:05:10.081 "w_mbytes_per_sec": 0 00:05:10.081 }, 00:05:10.081 "claimed": false, 00:05:10.081 "zoned": false, 00:05:10.081 "supported_io_types": { 00:05:10.081 "read": true, 00:05:10.081 "write": true, 00:05:10.081 "unmap": true, 00:05:10.081 "flush": true, 00:05:10.081 "reset": true, 00:05:10.081 "nvme_admin": false, 00:05:10.081 "nvme_io": false, 00:05:10.081 "nvme_io_md": false, 00:05:10.081 "write_zeroes": true, 00:05:10.081 "zcopy": true, 00:05:10.081 "get_zone_info": false, 00:05:10.081 "zone_management": false, 00:05:10.081 "zone_append": false, 00:05:10.081 "compare": false, 00:05:10.081 "compare_and_write": false, 00:05:10.081 "abort": true, 00:05:10.081 "seek_hole": false, 00:05:10.081 "seek_data": false, 00:05:10.081 "copy": true, 00:05:10.081 "nvme_iov_md": false 00:05:10.081 }, 00:05:10.081 "memory_domains": [ 00:05:10.081 { 00:05:10.081 "dma_device_id": "system", 00:05:10.081 "dma_device_type": 1 00:05:10.081 }, 00:05:10.081 { 00:05:10.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.081 "dma_device_type": 2 00:05:10.081 } 00:05:10.081 ], 00:05:10.081 "driver_specific": {} 00:05:10.081 } 00:05:10.081 ]' 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.081 01:43:24 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.081 00:05:10.081 real 0m0.113s 00:05:10.081 user 0m0.075s 00:05:10.081 sys 0m0.011s 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.081 01:43:24 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.081 ************************************ 00:05:10.081 END TEST rpc_plugins 00:05:10.081 ************************************ 00:05:10.081 01:43:24 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.081 01:43:24 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.081 01:43:24 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.081 01:43:24 
rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.339 ************************************ 00:05:10.339 START TEST rpc_trace_cmd_test 00:05:10.339 ************************************ 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.339 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1292006", 00:05:10.339 "tpoint_group_mask": "0x8", 00:05:10.339 "iscsi_conn": { 00:05:10.339 "mask": "0x2", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "scsi": { 00:05:10.339 "mask": "0x4", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "bdev": { 00:05:10.339 "mask": "0x8", 00:05:10.339 "tpoint_mask": "0xffffffffffffffff" 00:05:10.339 }, 00:05:10.339 "nvmf_rdma": { 00:05:10.339 "mask": "0x10", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "nvmf_tcp": { 00:05:10.339 "mask": "0x20", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "ftl": { 00:05:10.339 "mask": "0x40", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "blobfs": { 00:05:10.339 "mask": "0x80", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "dsa": { 00:05:10.339 "mask": "0x200", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "thread": { 00:05:10.339 "mask": "0x400", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "nvme_pcie": { 00:05:10.339 "mask": "0x800", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "iaa": { 00:05:10.339 "mask": "0x1000", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "nvme_tcp": { 00:05:10.339 "mask": "0x2000", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "bdev_nvme": { 00:05:10.339 "mask": "0x4000", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 }, 00:05:10.339 "sock": { 00:05:10.339 "mask": "0x8000", 00:05:10.339 "tpoint_mask": "0x0" 00:05:10.339 } 00:05:10.339 }' 00:05:10.339 01:43:24 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.339 00:05:10.339 real 0m0.197s 00:05:10.339 user 0m0.167s 00:05:10.339 sys 0m0.023s 00:05:10.339 01:43:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.339 01:43:25 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.339 ************************************ 00:05:10.339 END TEST rpc_trace_cmd_test 00:05:10.339 ************************************ 00:05:10.339 01:43:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:10.339 01:43:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.340 01:43:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.340 01:43:25 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.340 01:43:25 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.340 01:43:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.340 ************************************ 00:05:10.340 START TEST rpc_daemon_integrity 00:05:10.340 ************************************ 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.340 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.598 { 00:05:10.598 "name": "Malloc2", 00:05:10.598 "aliases": [ 00:05:10.598 "0744aebb-a83a-41f3-9b8f-c80ff6776066" 00:05:10.598 ], 00:05:10.598 "product_name": "Malloc disk", 00:05:10.598 "block_size": 512, 00:05:10.598 "num_blocks": 16384, 00:05:10.598 "uuid": "0744aebb-a83a-41f3-9b8f-c80ff6776066", 00:05:10.598 "assigned_rate_limits": { 00:05:10.598 "rw_ios_per_sec": 0, 00:05:10.598 "rw_mbytes_per_sec": 0, 00:05:10.598 "r_mbytes_per_sec": 0, 00:05:10.598 "w_mbytes_per_sec": 0 00:05:10.598 }, 00:05:10.598 "claimed": false, 00:05:10.598 "zoned": false, 00:05:10.598 "supported_io_types": { 00:05:10.598 "read": true, 00:05:10.598 "write": true, 00:05:10.598 "unmap": true, 00:05:10.598 "flush": true, 00:05:10.598 "reset": true, 00:05:10.598 "nvme_admin": false, 00:05:10.598 "nvme_io": false, 00:05:10.598 "nvme_io_md": false, 00:05:10.598 "write_zeroes": true, 00:05:10.598 "zcopy": true, 00:05:10.598 "get_zone_info": false, 00:05:10.598 "zone_management": false, 00:05:10.598 "zone_append": false, 00:05:10.598 "compare": false, 00:05:10.598 "compare_and_write": false, 
00:05:10.598 "abort": true, 00:05:10.598 "seek_hole": false, 00:05:10.598 "seek_data": false, 00:05:10.598 "copy": true, 00:05:10.598 "nvme_iov_md": false 00:05:10.598 }, 00:05:10.598 "memory_domains": [ 00:05:10.598 { 00:05:10.598 "dma_device_id": "system", 00:05:10.598 "dma_device_type": 1 00:05:10.598 }, 00:05:10.598 { 00:05:10.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.598 "dma_device_type": 2 00:05:10.598 } 00:05:10.598 ], 00:05:10.598 "driver_specific": {} 00:05:10.598 } 00:05:10.598 ]' 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.598 [2024-07-24 01:43:25.333817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:10.598 [2024-07-24 01:43:25.333867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.598 [2024-07-24 01:43:25.333893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x167c290 00:05:10.598 [2024-07-24 01:43:25.333908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.598 [2024-07-24 01:43:25.335257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.598 [2024-07-24 01:43:25.335286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.598 Passthru0 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.598 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.598 { 00:05:10.598 "name": "Malloc2", 00:05:10.598 "aliases": [ 00:05:10.598 "0744aebb-a83a-41f3-9b8f-c80ff6776066" 00:05:10.598 ], 00:05:10.598 "product_name": "Malloc disk", 00:05:10.598 "block_size": 512, 00:05:10.598 "num_blocks": 16384, 00:05:10.598 "uuid": "0744aebb-a83a-41f3-9b8f-c80ff6776066", 00:05:10.598 "assigned_rate_limits": { 00:05:10.598 "rw_ios_per_sec": 0, 00:05:10.598 "rw_mbytes_per_sec": 0, 00:05:10.598 "r_mbytes_per_sec": 0, 00:05:10.598 "w_mbytes_per_sec": 0 00:05:10.598 }, 00:05:10.598 "claimed": true, 00:05:10.599 "claim_type": "exclusive_write", 00:05:10.599 "zoned": false, 00:05:10.599 "supported_io_types": { 00:05:10.599 "read": true, 00:05:10.599 "write": true, 00:05:10.599 "unmap": true, 00:05:10.599 "flush": true, 00:05:10.599 "reset": true, 00:05:10.599 "nvme_admin": false, 00:05:10.599 "nvme_io": false, 00:05:10.599 "nvme_io_md": false, 00:05:10.599 "write_zeroes": true, 00:05:10.599 "zcopy": true, 00:05:10.599 "get_zone_info": false, 00:05:10.599 "zone_management": false, 00:05:10.599 "zone_append": false, 00:05:10.599 "compare": false, 00:05:10.599 "compare_and_write": false, 00:05:10.599 "abort": true, 00:05:10.599 "seek_hole": false, 00:05:10.599 "seek_data": false, 00:05:10.599 "copy": true, 
00:05:10.599 "nvme_iov_md": false 00:05:10.599 }, 00:05:10.599 "memory_domains": [ 00:05:10.599 { 00:05:10.599 "dma_device_id": "system", 00:05:10.599 "dma_device_type": 1 00:05:10.599 }, 00:05:10.599 { 00:05:10.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.599 "dma_device_type": 2 00:05:10.599 } 00:05:10.599 ], 00:05:10.599 "driver_specific": {} 00:05:10.599 }, 00:05:10.599 { 00:05:10.599 "name": "Passthru0", 00:05:10.599 "aliases": [ 00:05:10.599 "24e5847f-56a2-557e-8f08-f4033fe3ed9e" 00:05:10.599 ], 00:05:10.599 "product_name": "passthru", 00:05:10.599 "block_size": 512, 00:05:10.599 "num_blocks": 16384, 00:05:10.599 "uuid": "24e5847f-56a2-557e-8f08-f4033fe3ed9e", 00:05:10.599 "assigned_rate_limits": { 00:05:10.599 "rw_ios_per_sec": 0, 00:05:10.599 "rw_mbytes_per_sec": 0, 00:05:10.599 "r_mbytes_per_sec": 0, 00:05:10.599 "w_mbytes_per_sec": 0 00:05:10.599 }, 00:05:10.599 "claimed": false, 00:05:10.599 "zoned": false, 00:05:10.599 "supported_io_types": { 00:05:10.599 "read": true, 00:05:10.599 "write": true, 00:05:10.599 "unmap": true, 00:05:10.599 "flush": true, 00:05:10.599 "reset": true, 00:05:10.599 "nvme_admin": false, 00:05:10.599 "nvme_io": false, 00:05:10.599 "nvme_io_md": false, 00:05:10.599 "write_zeroes": true, 00:05:10.599 "zcopy": true, 00:05:10.599 "get_zone_info": false, 00:05:10.599 "zone_management": false, 00:05:10.599 "zone_append": false, 00:05:10.599 "compare": false, 00:05:10.599 "compare_and_write": false, 00:05:10.599 "abort": true, 00:05:10.599 "seek_hole": false, 00:05:10.599 "seek_data": false, 00:05:10.599 "copy": true, 00:05:10.599 "nvme_iov_md": false 00:05:10.599 }, 00:05:10.599 "memory_domains": [ 00:05:10.599 { 00:05:10.599 "dma_device_id": "system", 00:05:10.599 "dma_device_type": 1 00:05:10.599 }, 00:05:10.599 { 00:05:10.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.599 "dma_device_type": 2 00:05:10.599 } 00:05:10.599 ], 00:05:10.599 "driver_specific": { 00:05:10.599 "passthru": { 00:05:10.599 "name": "Passthru0", 00:05:10.599 "base_bdev_name": "Malloc2" 00:05:10.599 } 00:05:10.599 } 00:05:10.599 } 00:05:10.599 ]' 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.599 01:43:25 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.599 00:05:10.599 real 0m0.231s 00:05:10.599 user 0m0.159s 00:05:10.599 sys 0m0.017s 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.599 01:43:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.599 ************************************ 00:05:10.599 END TEST rpc_daemon_integrity 00:05:10.599 ************************************ 00:05:10.599 01:43:25 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.599 01:43:25 rpc -- rpc/rpc.sh@84 -- # killprocess 1292006 00:05:10.599 01:43:25 rpc -- common/autotest_common.sh@948 -- # '[' -z 1292006 ']' 00:05:10.599 01:43:25 rpc -- common/autotest_common.sh@952 -- # kill -0 1292006 00:05:10.599 01:43:25 rpc -- common/autotest_common.sh@953 -- # uname 00:05:10.599 01:43:25 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.599 01:43:25 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292006 00:05:10.857 01:43:25 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.857 01:43:25 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.857 01:43:25 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292006' 00:05:10.857 killing process with pid 1292006 00:05:10.857 01:43:25 rpc -- common/autotest_common.sh@967 -- # kill 1292006 00:05:10.857 01:43:25 rpc -- common/autotest_common.sh@972 -- # wait 1292006 00:05:11.115 00:05:11.115 real 0m1.912s 00:05:11.115 user 0m2.393s 00:05:11.115 sys 0m0.600s 00:05:11.115 01:43:25 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.115 01:43:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.115 ************************************ 00:05:11.115 END TEST rpc 00:05:11.115 ************************************ 00:05:11.115 01:43:25 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.115 01:43:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.115 01:43:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.115 01:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:11.115 ************************************ 00:05:11.115 START TEST skip_rpc 00:05:11.115 ************************************ 00:05:11.115 01:43:25 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.115 * Looking for test storage... 
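[annotation, not part of the captured output] The rpc_integrity and rpc_daemon_integrity runs above exercise the same bdev lifecycle over JSON-RPC: create a malloc bdev, layer a passthru bdev on top of it, confirm both appear in bdev_get_bdevs, then delete them in reverse order and confirm the list is empty. A minimal sketch of that sequence with scripts/rpc.py, assuming a target is already listening on the default /var/tmp/spdk.sock (names and sizes mirror the test, but this is an illustrative sketch, not the harness itself):

  # create an 8 MiB malloc bdev with 512-byte blocks, then claim it with a passthru bdev
  ./scripts/rpc.py bdev_malloc_create 8 512                  # prints the new name, e.g. Malloc2
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length                # expect 2, as the '[' 2 == 2 ']' check above asserts
  # tear down and verify the bdev list is empty again
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length                # expect 0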
00:05:11.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.115 01:43:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.115 01:43:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.115 01:43:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.115 01:43:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.115 01:43:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.115 01:43:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.373 ************************************ 00:05:11.373 START TEST skip_rpc 00:05:11.373 ************************************ 00:05:11.373 01:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:11.373 01:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1292433 00:05:11.373 01:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.373 01:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.373 01:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.373 [2024-07-24 01:43:26.083265] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:11.373 [2024-07-24 01:43:26.083374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292433 ] 00:05:11.373 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.373 [2024-07-24 01:43:26.146988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.373 [2024-07-24 01:43:26.236622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1292433 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1292433 ']' 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1292433 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292433 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292433' 00:05:16.636 killing process with pid 1292433 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1292433 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1292433 00:05:16.636 00:05:16.636 real 0m5.453s 00:05:16.636 user 0m5.127s 00:05:16.636 sys 0m0.329s 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.636 01:43:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.636 ************************************ 00:05:16.636 END TEST skip_rpc 00:05:16.636 ************************************ 00:05:16.636 01:43:31 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:16.636 01:43:31 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.636 01:43:31 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.636 01:43:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.895 ************************************ 00:05:16.895 START TEST skip_rpc_with_json 00:05:16.895 ************************************ 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1293120 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1293120 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1293120 ']' 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
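[annotation, not part of the captured output] skip_rpc starts the target with --no-rpc-server and expects any RPC call to fail, while skip_rpc_with_json (starting above) runs the target normally and waits for the UNIX socket before issuing RPCs. A hedged sketch of the two cases; core mask and backgrounding are illustrative, and the harness itself uses its waitforlisten helper rather than a raw sleep:

  # RPC server enabled (default): spdk_get_version succeeds once /var/tmp/spdk.sock is up
  ./build/bin/spdk_tgt -m 0x1 &
  ./scripts/rpc.py spdk_get_version

  # RPC server disabled: the same call is expected to fail, which is what skip_rpc asserts
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  ./scripts/rpc.py spdk_get_version || echo "RPC unavailable, as expected"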
00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.895 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.895 [2024-07-24 01:43:31.589669] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:16.895 [2024-07-24 01:43:31.589748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293120 ] 00:05:16.895 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.895 [2024-07-24 01:43:31.645731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.895 [2024-07-24 01:43:31.731906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.153 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.153 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:17.153 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:17.153 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.153 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.154 [2024-07-24 01:43:31.982169] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:17.154 request: 00:05:17.154 { 00:05:17.154 "trtype": "tcp", 00:05:17.154 "method": "nvmf_get_transports", 00:05:17.154 "req_id": 1 00:05:17.154 } 00:05:17.154 Got JSON-RPC error response 00:05:17.154 response: 00:05:17.154 { 00:05:17.154 "code": -19, 00:05:17.154 "message": "No such device" 00:05:17.154 } 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.154 [2024-07-24 01:43:31.990326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.154 01:43:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.412 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.412 01:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.412 { 00:05:17.412 "subsystems": [ 00:05:17.412 { 00:05:17.412 "subsystem": "vfio_user_target", 00:05:17.412 "config": null 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "keyring", 00:05:17.412 "config": [] 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "iobuf", 00:05:17.412 "config": [ 00:05:17.412 { 00:05:17.412 "method": "iobuf_set_options", 00:05:17.412 "params": { 00:05:17.412 "small_pool_count": 8192, 00:05:17.412 "large_pool_count": 1024, 00:05:17.412 "small_bufsize": 8192, 00:05:17.412 "large_bufsize": 
135168 00:05:17.412 } 00:05:17.412 } 00:05:17.412 ] 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "sock", 00:05:17.412 "config": [ 00:05:17.412 { 00:05:17.412 "method": "sock_set_default_impl", 00:05:17.412 "params": { 00:05:17.412 "impl_name": "posix" 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "sock_impl_set_options", 00:05:17.412 "params": { 00:05:17.412 "impl_name": "ssl", 00:05:17.412 "recv_buf_size": 4096, 00:05:17.412 "send_buf_size": 4096, 00:05:17.412 "enable_recv_pipe": true, 00:05:17.412 "enable_quickack": false, 00:05:17.412 "enable_placement_id": 0, 00:05:17.412 "enable_zerocopy_send_server": true, 00:05:17.412 "enable_zerocopy_send_client": false, 00:05:17.412 "zerocopy_threshold": 0, 00:05:17.412 "tls_version": 0, 00:05:17.412 "enable_ktls": false 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "sock_impl_set_options", 00:05:17.412 "params": { 00:05:17.412 "impl_name": "posix", 00:05:17.412 "recv_buf_size": 2097152, 00:05:17.412 "send_buf_size": 2097152, 00:05:17.412 "enable_recv_pipe": true, 00:05:17.412 "enable_quickack": false, 00:05:17.412 "enable_placement_id": 0, 00:05:17.412 "enable_zerocopy_send_server": true, 00:05:17.412 "enable_zerocopy_send_client": false, 00:05:17.412 "zerocopy_threshold": 0, 00:05:17.412 "tls_version": 0, 00:05:17.412 "enable_ktls": false 00:05:17.412 } 00:05:17.412 } 00:05:17.412 ] 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "vmd", 00:05:17.412 "config": [] 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "accel", 00:05:17.412 "config": [ 00:05:17.412 { 00:05:17.412 "method": "accel_set_options", 00:05:17.412 "params": { 00:05:17.412 "small_cache_size": 128, 00:05:17.412 "large_cache_size": 16, 00:05:17.412 "task_count": 2048, 00:05:17.412 "sequence_count": 2048, 00:05:17.412 "buf_count": 2048 00:05:17.412 } 00:05:17.412 } 00:05:17.412 ] 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "bdev", 00:05:17.412 "config": [ 00:05:17.412 { 00:05:17.412 "method": "bdev_set_options", 00:05:17.412 "params": { 00:05:17.412 "bdev_io_pool_size": 65535, 00:05:17.412 "bdev_io_cache_size": 256, 00:05:17.412 "bdev_auto_examine": true, 00:05:17.412 "iobuf_small_cache_size": 128, 00:05:17.412 "iobuf_large_cache_size": 16 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "bdev_raid_set_options", 00:05:17.412 "params": { 00:05:17.412 "process_window_size_kb": 1024, 00:05:17.412 "process_max_bandwidth_mb_sec": 0 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "bdev_iscsi_set_options", 00:05:17.412 "params": { 00:05:17.412 "timeout_sec": 30 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "bdev_nvme_set_options", 00:05:17.412 "params": { 00:05:17.412 "action_on_timeout": "none", 00:05:17.412 "timeout_us": 0, 00:05:17.412 "timeout_admin_us": 0, 00:05:17.412 "keep_alive_timeout_ms": 10000, 00:05:17.412 "arbitration_burst": 0, 00:05:17.412 "low_priority_weight": 0, 00:05:17.412 "medium_priority_weight": 0, 00:05:17.412 "high_priority_weight": 0, 00:05:17.412 "nvme_adminq_poll_period_us": 10000, 00:05:17.412 "nvme_ioq_poll_period_us": 0, 00:05:17.412 "io_queue_requests": 0, 00:05:17.412 "delay_cmd_submit": true, 00:05:17.412 "transport_retry_count": 4, 00:05:17.412 "bdev_retry_count": 3, 00:05:17.412 "transport_ack_timeout": 0, 00:05:17.412 "ctrlr_loss_timeout_sec": 0, 00:05:17.412 "reconnect_delay_sec": 0, 00:05:17.412 "fast_io_fail_timeout_sec": 0, 00:05:17.412 "disable_auto_failback": false, 00:05:17.412 "generate_uuids": 
false, 00:05:17.412 "transport_tos": 0, 00:05:17.412 "nvme_error_stat": false, 00:05:17.412 "rdma_srq_size": 0, 00:05:17.412 "io_path_stat": false, 00:05:17.412 "allow_accel_sequence": false, 00:05:17.412 "rdma_max_cq_size": 0, 00:05:17.412 "rdma_cm_event_timeout_ms": 0, 00:05:17.412 "dhchap_digests": [ 00:05:17.412 "sha256", 00:05:17.412 "sha384", 00:05:17.412 "sha512" 00:05:17.412 ], 00:05:17.412 "dhchap_dhgroups": [ 00:05:17.412 "null", 00:05:17.412 "ffdhe2048", 00:05:17.412 "ffdhe3072", 00:05:17.412 "ffdhe4096", 00:05:17.412 "ffdhe6144", 00:05:17.412 "ffdhe8192" 00:05:17.412 ] 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "bdev_nvme_set_hotplug", 00:05:17.412 "params": { 00:05:17.412 "period_us": 100000, 00:05:17.412 "enable": false 00:05:17.412 } 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "method": "bdev_wait_for_examine" 00:05:17.412 } 00:05:17.412 ] 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "scsi", 00:05:17.412 "config": null 00:05:17.412 }, 00:05:17.412 { 00:05:17.412 "subsystem": "scheduler", 00:05:17.412 "config": [ 00:05:17.412 { 00:05:17.412 "method": "framework_set_scheduler", 00:05:17.412 "params": { 00:05:17.413 "name": "static" 00:05:17.413 } 00:05:17.413 } 00:05:17.413 ] 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "subsystem": "vhost_scsi", 00:05:17.413 "config": [] 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "subsystem": "vhost_blk", 00:05:17.413 "config": [] 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "subsystem": "ublk", 00:05:17.413 "config": [] 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "subsystem": "nbd", 00:05:17.413 "config": [] 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "subsystem": "nvmf", 00:05:17.413 "config": [ 00:05:17.413 { 00:05:17.413 "method": "nvmf_set_config", 00:05:17.413 "params": { 00:05:17.413 "discovery_filter": "match_any", 00:05:17.413 "admin_cmd_passthru": { 00:05:17.413 "identify_ctrlr": false 00:05:17.413 } 00:05:17.413 } 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "method": "nvmf_set_max_subsystems", 00:05:17.413 "params": { 00:05:17.413 "max_subsystems": 1024 00:05:17.413 } 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "method": "nvmf_set_crdt", 00:05:17.413 "params": { 00:05:17.413 "crdt1": 0, 00:05:17.413 "crdt2": 0, 00:05:17.413 "crdt3": 0 00:05:17.413 } 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "method": "nvmf_create_transport", 00:05:17.413 "params": { 00:05:17.413 "trtype": "TCP", 00:05:17.413 "max_queue_depth": 128, 00:05:17.413 "max_io_qpairs_per_ctrlr": 127, 00:05:17.413 "in_capsule_data_size": 4096, 00:05:17.413 "max_io_size": 131072, 00:05:17.413 "io_unit_size": 131072, 00:05:17.413 "max_aq_depth": 128, 00:05:17.413 "num_shared_buffers": 511, 00:05:17.413 "buf_cache_size": 4294967295, 00:05:17.413 "dif_insert_or_strip": false, 00:05:17.413 "zcopy": false, 00:05:17.413 "c2h_success": true, 00:05:17.413 "sock_priority": 0, 00:05:17.413 "abort_timeout_sec": 1, 00:05:17.413 "ack_timeout": 0, 00:05:17.413 "data_wr_pool_size": 0 00:05:17.413 } 00:05:17.413 } 00:05:17.413 ] 00:05:17.413 }, 00:05:17.413 { 00:05:17.413 "subsystem": "iscsi", 00:05:17.413 "config": [ 00:05:17.413 { 00:05:17.413 "method": "iscsi_set_options", 00:05:17.413 "params": { 00:05:17.413 "node_base": "iqn.2016-06.io.spdk", 00:05:17.413 "max_sessions": 128, 00:05:17.413 "max_connections_per_session": 2, 00:05:17.413 "max_queue_depth": 64, 00:05:17.413 "default_time2wait": 2, 00:05:17.413 "default_time2retain": 20, 00:05:17.413 "first_burst_length": 8192, 00:05:17.413 "immediate_data": true, 00:05:17.413 "allow_duplicated_isid": 
false, 00:05:17.413 "error_recovery_level": 0, 00:05:17.413 "nop_timeout": 60, 00:05:17.413 "nop_in_interval": 30, 00:05:17.413 "disable_chap": false, 00:05:17.413 "require_chap": false, 00:05:17.413 "mutual_chap": false, 00:05:17.413 "chap_group": 0, 00:05:17.413 "max_large_datain_per_connection": 64, 00:05:17.413 "max_r2t_per_connection": 4, 00:05:17.413 "pdu_pool_size": 36864, 00:05:17.413 "immediate_data_pool_size": 16384, 00:05:17.413 "data_out_pool_size": 2048 00:05:17.413 } 00:05:17.413 } 00:05:17.413 ] 00:05:17.413 } 00:05:17.413 ] 00:05:17.413 } 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1293120 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1293120 ']' 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1293120 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293120 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293120' 00:05:17.413 killing process with pid 1293120 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1293120 00:05:17.413 01:43:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1293120 00:05:17.979 01:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1293260 00:05:17.979 01:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.979 01:43:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1293260 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1293260 ']' 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1293260 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293260 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293260' 00:05:23.239 killing process with pid 1293260 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1293260 00:05:23.239 01:43:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 
1293260 00:05:23.239 01:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.239 01:43:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:23.239 00:05:23.239 real 0m6.470s 00:05:23.239 user 0m6.060s 00:05:23.239 sys 0m0.681s 00:05:23.239 01:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.239 01:43:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.239 ************************************ 00:05:23.239 END TEST skip_rpc_with_json 00:05:23.239 ************************************ 00:05:23.239 01:43:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:23.239 01:43:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.239 01:43:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.240 01:43:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.240 ************************************ 00:05:23.240 START TEST skip_rpc_with_delay 00:05:23.240 ************************************ 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:23.240 [2024-07-24 01:43:38.105550] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
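[annotation, not part of the captured output] The skip_rpc_with_json run above verifies that a live configuration survives a save/reload round trip: it creates the TCP transport over RPC, dumps the running configuration with save_config, restarts the target from that JSON with no RPC server, and greps the new log for the transport-init notice. A minimal sketch of the same flow; config.json and log.txt follow the test's file names, and the target must finish starting before the grep (the test waits, then kills it):

  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  # reload purely from the saved configuration, no further RPC calls needed
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
  grep -q 'TCP Transport Init' log.txt    # the transport came back from config.json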
00:05:23.240 [2024-07-24 01:43:38.105671] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.240 00:05:23.240 real 0m0.066s 00:05:23.240 user 0m0.042s 00:05:23.240 sys 0m0.024s 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.240 01:43:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:23.240 ************************************ 00:05:23.240 END TEST skip_rpc_with_delay 00:05:23.240 ************************************ 00:05:23.498 01:43:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:23.498 01:43:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:23.498 01:43:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:23.498 01:43:38 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.498 01:43:38 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.498 01:43:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.498 ************************************ 00:05:23.498 START TEST exit_on_failed_rpc_init 00:05:23.498 ************************************ 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1293982 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1293982 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1293982 ']' 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.498 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.498 [2024-07-24 01:43:38.216495] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:23.498 [2024-07-24 01:43:38.216575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293982 ] 00:05:23.498 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.498 [2024-07-24 01:43:38.273274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.498 [2024-07-24 01:43:38.362686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.756 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:24.013 [2024-07-24 01:43:38.660962] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
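[annotation, not part of the captured output] exit_on_failed_rpc_init deliberately points the second target (-m 0x2) at the same default RPC socket as the first, so the "RPC Unix domain socket path /var/tmp/spdk.sock in use" failure logged just below is the expected outcome. Outside this negative test, two targets can coexist by giving each its own socket; a hedged sketch, with the second socket path chosen here only for illustration:

  # first instance on the default socket
  ./build/bin/spdk_tgt -m 0x1 &
  # second instance on its own RPC socket to avoid the clash provoked above
  ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  # point the client at whichever instance you want to talk to
  ./scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version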
00:05:24.013 [2024-07-24 01:43:38.661035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293989 ] 00:05:24.013 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.013 [2024-07-24 01:43:38.722098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.013 [2024-07-24 01:43:38.814201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.013 [2024-07-24 01:43:38.814330] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:24.013 [2024-07-24 01:43:38.814366] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.013 [2024-07-24 01:43:38.814393] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1293982 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1293982 ']' 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1293982 00:05:24.013 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1293982 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1293982' 00:05:24.271 killing process with pid 1293982 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1293982 00:05:24.271 01:43:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1293982 00:05:24.558 00:05:24.558 real 0m1.171s 00:05:24.558 user 0m1.273s 00:05:24.558 sys 0m0.451s 00:05:24.558 01:43:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.558 01:43:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.558 ************************************ 00:05:24.558 END TEST exit_on_failed_rpc_init 00:05:24.558 ************************************ 00:05:24.558 01:43:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.558 00:05:24.558 real 0m13.404s 00:05:24.558 user 0m12.613s 00:05:24.558 sys 0m1.634s 00:05:24.558 01:43:39 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.558 01:43:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.558 ************************************ 00:05:24.558 END TEST skip_rpc 00:05:24.558 ************************************ 00:05:24.558 01:43:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.558 01:43:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.558 01:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.558 01:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:24.558 ************************************ 00:05:24.558 START TEST rpc_client 00:05:24.558 ************************************ 00:05:24.558 01:43:39 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.814 * Looking for test storage... 00:05:24.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:24.815 01:43:39 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:24.815 OK 00:05:24.815 01:43:39 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.815 00:05:24.815 real 0m0.068s 00:05:24.815 user 0m0.029s 00:05:24.815 sys 0m0.045s 00:05:24.815 01:43:39 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.815 01:43:39 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:24.815 ************************************ 00:05:24.815 END TEST rpc_client 00:05:24.815 ************************************ 00:05:24.815 01:43:39 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.815 01:43:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.815 01:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.815 01:43:39 -- common/autotest_common.sh@10 -- # set +x 00:05:24.815 ************************************ 00:05:24.815 START TEST json_config 00:05:24.815 ************************************ 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:05:24.815 01:43:39 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.815 01:43:39 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.815 01:43:39 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.815 01:43:39 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.815 01:43:39 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.815 01:43:39 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.815 01:43:39 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.815 01:43:39 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.815 01:43:39 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@47 -- # : 0 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:24.815 01:43:39 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:24.815 INFO: JSON configuration test init 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.815 01:43:39 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:24.815 01:43:39 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.815 01:43:39 json_config -- json_config/common.sh@10 -- # shift 00:05:24.815 01:43:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.815 01:43:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.815 01:43:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.815 01:43:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:05:24.815 01:43:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.815 01:43:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1294235 00:05:24.815 01:43:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:24.815 01:43:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.815 Waiting for target to run... 00:05:24.815 01:43:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1294235 /var/tmp/spdk_tgt.sock 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@829 -- # '[' -z 1294235 ']' 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.815 01:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.815 [2024-07-24 01:43:39.629628] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:24.815 [2024-07-24 01:43:39.629724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294235 ] 00:05:24.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.380 [2024-07-24 01:43:40.141084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.380 [2024-07-24 01:43:40.223128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.945 01:43:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.945 01:43:40 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:25.945 01:43:40 json_config -- json_config/common.sh@26 -- # echo '' 00:05:25.945 00:05:25.945 01:43:40 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:25.945 01:43:40 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:25.945 01:43:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.945 01:43:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 01:43:40 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:25.945 01:43:40 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:25.945 01:43:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:25.945 01:43:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.945 01:43:40 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:25.945 01:43:40 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:25.945 01:43:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:29.221 01:43:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.221 01:43:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:29.221 01:43:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@51 -- # sort 00:05:29.221 01:43:43 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:29.221 01:43:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:29.221 01:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:29.221 01:43:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:29.221 01:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:29.221 01:43:44 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.221 01:43:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.478 MallocForNvmf0 00:05:29.478 
01:43:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.478 01:43:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.735 MallocForNvmf1 00:05:29.736 01:43:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.736 01:43:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.993 [2024-07-24 01:43:44.745678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.993 01:43:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.993 01:43:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.251 01:43:45 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.251 01:43:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:30.508 01:43:45 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.508 01:43:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:30.766 01:43:45 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.766 01:43:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:31.023 [2024-07-24 01:43:45.732928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:31.023 01:43:45 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:31.023 01:43:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.023 01:43:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.023 01:43:45 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:31.023 01:43:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.023 01:43:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.023 01:43:45 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:31.024 01:43:45 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.024 01:43:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:31.281 MallocBdevForConfigChangeCheck 00:05:31.281 01:43:46 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:31.281 01:43:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:31.281 01:43:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.281 01:43:46 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:31.281 01:43:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.539 01:43:46 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:31.539 INFO: shutting down applications... 00:05:31.539 01:43:46 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:31.539 01:43:46 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:31.539 01:43:46 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:31.539 01:43:46 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:33.438 Calling clear_iscsi_subsystem 00:05:33.438 Calling clear_nvmf_subsystem 00:05:33.438 Calling clear_nbd_subsystem 00:05:33.438 Calling clear_ublk_subsystem 00:05:33.438 Calling clear_vhost_blk_subsystem 00:05:33.438 Calling clear_vhost_scsi_subsystem 00:05:33.438 Calling clear_bdev_subsystem 00:05:33.438 01:43:48 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:33.438 01:43:48 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:33.438 01:43:48 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:33.438 01:43:48 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:33.438 01:43:48 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:33.438 01:43:48 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:33.697 01:43:48 json_config -- json_config/json_config.sh@349 -- # break 00:05:33.697 01:43:48 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:33.697 01:43:48 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:33.697 01:43:48 json_config -- json_config/common.sh@31 -- # local app=target 00:05:33.697 01:43:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.697 01:43:48 json_config -- json_config/common.sh@35 -- # [[ -n 1294235 ]] 00:05:33.697 01:43:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1294235 00:05:33.697 01:43:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.697 01:43:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.697 01:43:48 json_config -- json_config/common.sh@41 -- # kill -0 1294235 00:05:33.697 01:43:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.266 01:43:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.266 01:43:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.266 01:43:48 json_config -- json_config/common.sh@41 -- # kill -0 1294235 00:05:34.266 01:43:48 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.266 01:43:48 json_config -- json_config/common.sh@43 -- # break 00:05:34.266 01:43:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.266 01:43:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.266 SPDK target shutdown done 00:05:34.266 01:43:48 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:34.266 INFO: relaunching applications... 00:05:34.266 01:43:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.266 01:43:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:34.266 01:43:48 json_config -- json_config/common.sh@10 -- # shift 00:05:34.266 01:43:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.266 01:43:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.266 01:43:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.266 01:43:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.266 01:43:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.266 01:43:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1295486 00:05:34.266 01:43:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:34.266 01:43:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.266 Waiting for target to run... 00:05:34.266 01:43:48 json_config -- json_config/common.sh@25 -- # waitforlisten 1295486 /var/tmp/spdk_tgt.sock 00:05:34.266 01:43:48 json_config -- common/autotest_common.sh@829 -- # '[' -z 1295486 ']' 00:05:34.266 01:43:48 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.266 01:43:48 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.266 01:43:48 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.266 01:43:48 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.266 01:43:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.266 [2024-07-24 01:43:49.011726] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:34.266 [2024-07-24 01:43:49.011810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295486 ] 00:05:34.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.832 [2024-07-24 01:43:49.508819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.832 [2024-07-24 01:43:49.590915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.114 [2024-07-24 01:43:52.626920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:38.114 [2024-07-24 01:43:52.659415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.678 01:43:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.678 01:43:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:38.678 01:43:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.678 00:05:38.678 01:43:53 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:38.678 01:43:53 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:38.678 INFO: Checking if target configuration is the same... 00:05:38.678 01:43:53 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.678 01:43:53 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:38.678 01:43:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.678 + '[' 2 -ne 2 ']' 00:05:38.678 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.678 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:38.678 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.678 +++ basename /dev/fd/62 00:05:38.678 ++ mktemp /tmp/62.XXX 00:05:38.678 + tmp_file_1=/tmp/62.iKy 00:05:38.678 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.678 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.678 + tmp_file_2=/tmp/spdk_tgt_config.json.XLI 00:05:38.678 + ret=0 00:05:38.678 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.935 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.192 + diff -u /tmp/62.iKy /tmp/spdk_tgt_config.json.XLI 00:05:39.192 + echo 'INFO: JSON config files are the same' 00:05:39.192 INFO: JSON config files are the same 00:05:39.192 + rm /tmp/62.iKy /tmp/spdk_tgt_config.json.XLI 00:05:39.192 + exit 0 00:05:39.192 01:43:53 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:39.192 01:43:53 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:39.192 INFO: changing configuration and checking if this can be detected... 
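The "JSON config files are the same" result just above comes from json_diff.sh: the test dumps the running target's configuration over the RPC socket, normalizes both that dump and the saved spdk_tgt_config.json with config_filter.py, and diffs the two. A minimal sketch of the same check, assuming the target listens on /var/tmp/spdk_tgt.sock, that config_filter.py reads its input on stdin as it is invoked here, and with illustrative /tmp file names:

  # dump the live configuration from the running spdk_tgt
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  # normalize both sides so ordering differences do not show up in the diff
  test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.sorted
  # exit 0 means no drift; any difference is treated as a detected configuration change
  diff -u /tmp/saved.sorted /tmp/live.sorted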
00:05:39.192 01:43:53 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.192 01:43:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:39.449 01:43:54 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.449 01:43:54 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:39.449 01:43:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.449 + '[' 2 -ne 2 ']' 00:05:39.449 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:39.449 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:39.449 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:39.449 +++ basename /dev/fd/62 00:05:39.449 ++ mktemp /tmp/62.XXX 00:05:39.449 + tmp_file_1=/tmp/62.IS7 00:05:39.449 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:39.449 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:39.449 + tmp_file_2=/tmp/spdk_tgt_config.json.mpN 00:05:39.449 + ret=0 00:05:39.449 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.707 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:39.707 + diff -u /tmp/62.IS7 /tmp/spdk_tgt_config.json.mpN 00:05:39.707 + ret=1 00:05:39.707 + echo '=== Start of file: /tmp/62.IS7 ===' 00:05:39.707 + cat /tmp/62.IS7 00:05:39.707 + echo '=== End of file: /tmp/62.IS7 ===' 00:05:39.707 + echo '' 00:05:39.707 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mpN ===' 00:05:39.707 + cat /tmp/spdk_tgt_config.json.mpN 00:05:39.707 + echo '=== End of file: /tmp/spdk_tgt_config.json.mpN ===' 00:05:39.707 + echo '' 00:05:39.707 + rm /tmp/62.IS7 /tmp/spdk_tgt_config.json.mpN 00:05:39.707 + exit 1 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:39.707 INFO: configuration change detected. 
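The target state being serialized and compared at this point was built earlier in the run (the RPC calls logged around 01:43:44-01:43:45), entirely through rpc.py against the target's Unix socket. A rough replay of that sequence, reusing the RPC names and arguments that appear in the log; the $RPC helper variable, the relative script path, and the socket path are assumptions for illustration:

  RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"              # helper shorthand, not part of the test scripts
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0         # 8 MB malloc bdev, 512-byte blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0              # the TCP transport init seen at 01:43:44
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420   # listener on 127.0.0.1:4420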
00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@321 -- # [[ -n 1295486 ]] 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.707 01:43:54 json_config -- json_config/json_config.sh@327 -- # killprocess 1295486 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@948 -- # '[' -z 1295486 ']' 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@952 -- # kill -0 1295486 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@953 -- # uname 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.707 01:43:54 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1295486 00:05:39.965 01:43:54 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.965 01:43:54 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.965 01:43:54 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1295486' 00:05:39.965 killing process with pid 1295486 00:05:39.965 01:43:54 json_config -- common/autotest_common.sh@967 -- # kill 1295486 00:05:39.965 01:43:54 json_config -- common/autotest_common.sh@972 -- # wait 1295486 00:05:41.338 01:43:56 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.338 01:43:56 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:41.338 01:43:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:41.338 01:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.338 01:43:56 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:41.338 01:43:56 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:41.338 INFO: Success 00:05:41.338 00:05:41.338 real 0m16.711s 
00:05:41.338 user 0m18.496s 00:05:41.338 sys 0m2.229s 00:05:41.338 01:43:56 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.338 01:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.338 ************************************ 00:05:41.338 END TEST json_config 00:05:41.338 ************************************ 00:05:41.595 01:43:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.595 01:43:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.595 01:43:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.595 01:43:56 -- common/autotest_common.sh@10 -- # set +x 00:05:41.595 ************************************ 00:05:41.595 START TEST json_config_extra_key 00:05:41.595 ************************************ 00:05:41.595 01:43:56 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:41.595 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.595 01:43:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.595 01:43:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.595 01:43:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.595 01:43:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.595 01:43:56 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.595 01:43:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.595 01:43:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.595 01:43:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:41.596 01:43:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:41.596 01:43:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:41.596 01:43:56 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:41.596 INFO: launching applications... 00:05:41.596 01:43:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1296473 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.596 Waiting for target to run... 00:05:41.596 01:43:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1296473 /var/tmp/spdk_tgt.sock 00:05:41.596 01:43:56 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1296473 ']' 00:05:41.596 01:43:56 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.596 01:43:56 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.596 01:43:56 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.596 01:43:56 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.596 01:43:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.596 [2024-07-24 01:43:56.384209] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:41.596 [2024-07-24 01:43:56.384308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296473 ] 00:05:41.596 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.160 [2024-07-24 01:43:56.876480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.160 [2024-07-24 01:43:56.955281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.724 01:43:57 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.724 01:43:57 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:42.724 00:05:42.724 01:43:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:42.724 INFO: shutting down applications... 00:05:42.724 01:43:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1296473 ]] 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1296473 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1296473 00:05:42.724 01:43:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1296473 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:42.982 01:43:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:42.982 SPDK target shutdown done 00:05:42.982 01:43:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:42.982 Success 00:05:42.982 00:05:42.982 real 0m1.546s 00:05:42.982 user 0m1.371s 00:05:42.982 sys 0m0.588s 00:05:42.982 01:43:57 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.982 01:43:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:42.982 ************************************ 00:05:42.982 END TEST json_config_extra_key 00:05:42.982 ************************************ 00:05:42.982 01:43:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:42.982 01:43:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.982 01:43:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.982 01:43:57 -- common/autotest_common.sh@10 -- # set +x 00:05:42.982 
************************************ 00:05:42.982 START TEST alias_rpc 00:05:42.982 ************************************ 00:05:42.982 01:43:57 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:43.240 * Looking for test storage... 00:05:43.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:43.240 01:43:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:43.240 01:43:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1296661 00:05:43.240 01:43:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.240 01:43:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1296661 00:05:43.240 01:43:57 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1296661 ']' 00:05:43.240 01:43:57 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.240 01:43:57 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.240 01:43:57 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.240 01:43:57 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.240 01:43:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.240 [2024-07-24 01:43:57.972897] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:43.240 [2024-07-24 01:43:57.972998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296661 ] 00:05:43.240 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.240 [2024-07-24 01:43:58.032589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.240 [2024-07-24 01:43:58.123120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.496 01:43:58 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.496 01:43:58 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:43.496 01:43:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:43.755 01:43:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1296661 00:05:43.755 01:43:58 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1296661 ']' 00:05:43.755 01:43:58 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1296661 00:05:43.755 01:43:58 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:43.755 01:43:58 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.755 01:43:58 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296661 00:05:44.045 01:43:58 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.045 01:43:58 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.045 01:43:58 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1296661' 00:05:44.045 killing process with pid 1296661 00:05:44.045 01:43:58 alias_rpc -- common/autotest_common.sh@967 -- # kill 1296661 00:05:44.045 01:43:58 
alias_rpc -- common/autotest_common.sh@972 -- # wait 1296661 00:05:44.303 00:05:44.303 real 0m1.210s 00:05:44.303 user 0m1.284s 00:05:44.303 sys 0m0.445s 00:05:44.303 01:43:59 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.303 01:43:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.303 ************************************ 00:05:44.303 END TEST alias_rpc 00:05:44.303 ************************************ 00:05:44.303 01:43:59 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:44.303 01:43:59 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.303 01:43:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.303 01:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.303 01:43:59 -- common/autotest_common.sh@10 -- # set +x 00:05:44.303 ************************************ 00:05:44.303 START TEST spdkcli_tcp 00:05:44.303 ************************************ 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:44.303 * Looking for test storage... 00:05:44.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1296969 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.303 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1296969 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1296969 ']' 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.303 01:43:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.561 [2024-07-24 01:43:59.234887] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:44.561 [2024-07-24 01:43:59.234985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296969 ] 00:05:44.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.561 [2024-07-24 01:43:59.291241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.561 [2024-07-24 01:43:59.378337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.561 [2024-07-24 01:43:59.378354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.817 01:43:59 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.817 01:43:59 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:44.817 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1296979 00:05:44.817 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.817 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.074 [ 00:05:45.074 "bdev_malloc_delete", 00:05:45.074 "bdev_malloc_create", 00:05:45.074 "bdev_null_resize", 00:05:45.074 "bdev_null_delete", 00:05:45.074 "bdev_null_create", 00:05:45.074 "bdev_nvme_cuse_unregister", 00:05:45.074 "bdev_nvme_cuse_register", 00:05:45.074 "bdev_opal_new_user", 00:05:45.074 "bdev_opal_set_lock_state", 00:05:45.074 "bdev_opal_delete", 00:05:45.074 "bdev_opal_get_info", 00:05:45.074 "bdev_opal_create", 00:05:45.074 "bdev_nvme_opal_revert", 00:05:45.074 "bdev_nvme_opal_init", 00:05:45.074 "bdev_nvme_send_cmd", 00:05:45.074 "bdev_nvme_get_path_iostat", 00:05:45.074 "bdev_nvme_get_mdns_discovery_info", 00:05:45.074 "bdev_nvme_stop_mdns_discovery", 00:05:45.074 "bdev_nvme_start_mdns_discovery", 00:05:45.074 "bdev_nvme_set_multipath_policy", 00:05:45.074 "bdev_nvme_set_preferred_path", 00:05:45.074 "bdev_nvme_get_io_paths", 00:05:45.074 "bdev_nvme_remove_error_injection", 00:05:45.074 "bdev_nvme_add_error_injection", 00:05:45.074 "bdev_nvme_get_discovery_info", 00:05:45.074 "bdev_nvme_stop_discovery", 00:05:45.074 "bdev_nvme_start_discovery", 00:05:45.074 "bdev_nvme_get_controller_health_info", 00:05:45.074 "bdev_nvme_disable_controller", 00:05:45.074 "bdev_nvme_enable_controller", 00:05:45.074 "bdev_nvme_reset_controller", 00:05:45.074 "bdev_nvme_get_transport_statistics", 00:05:45.074 "bdev_nvme_apply_firmware", 00:05:45.074 "bdev_nvme_detach_controller", 00:05:45.074 "bdev_nvme_get_controllers", 00:05:45.074 "bdev_nvme_attach_controller", 00:05:45.074 "bdev_nvme_set_hotplug", 00:05:45.074 "bdev_nvme_set_options", 00:05:45.074 "bdev_passthru_delete", 00:05:45.074 "bdev_passthru_create", 00:05:45.074 "bdev_lvol_set_parent_bdev", 00:05:45.074 "bdev_lvol_set_parent", 00:05:45.074 "bdev_lvol_check_shallow_copy", 00:05:45.074 "bdev_lvol_start_shallow_copy", 00:05:45.074 "bdev_lvol_grow_lvstore", 00:05:45.074 "bdev_lvol_get_lvols", 00:05:45.074 "bdev_lvol_get_lvstores", 00:05:45.074 "bdev_lvol_delete", 00:05:45.074 "bdev_lvol_set_read_only", 00:05:45.074 "bdev_lvol_resize", 00:05:45.074 "bdev_lvol_decouple_parent", 00:05:45.074 "bdev_lvol_inflate", 00:05:45.074 "bdev_lvol_rename", 00:05:45.074 "bdev_lvol_clone_bdev", 00:05:45.074 "bdev_lvol_clone", 00:05:45.074 "bdev_lvol_snapshot", 00:05:45.074 "bdev_lvol_create", 00:05:45.074 "bdev_lvol_delete_lvstore", 00:05:45.074 
"bdev_lvol_rename_lvstore", 00:05:45.074 "bdev_lvol_create_lvstore", 00:05:45.074 "bdev_raid_set_options", 00:05:45.074 "bdev_raid_remove_base_bdev", 00:05:45.074 "bdev_raid_add_base_bdev", 00:05:45.074 "bdev_raid_delete", 00:05:45.074 "bdev_raid_create", 00:05:45.074 "bdev_raid_get_bdevs", 00:05:45.074 "bdev_error_inject_error", 00:05:45.074 "bdev_error_delete", 00:05:45.074 "bdev_error_create", 00:05:45.074 "bdev_split_delete", 00:05:45.074 "bdev_split_create", 00:05:45.074 "bdev_delay_delete", 00:05:45.074 "bdev_delay_create", 00:05:45.074 "bdev_delay_update_latency", 00:05:45.074 "bdev_zone_block_delete", 00:05:45.074 "bdev_zone_block_create", 00:05:45.074 "blobfs_create", 00:05:45.074 "blobfs_detect", 00:05:45.074 "blobfs_set_cache_size", 00:05:45.074 "bdev_aio_delete", 00:05:45.074 "bdev_aio_rescan", 00:05:45.074 "bdev_aio_create", 00:05:45.074 "bdev_ftl_set_property", 00:05:45.074 "bdev_ftl_get_properties", 00:05:45.074 "bdev_ftl_get_stats", 00:05:45.074 "bdev_ftl_unmap", 00:05:45.074 "bdev_ftl_unload", 00:05:45.074 "bdev_ftl_delete", 00:05:45.074 "bdev_ftl_load", 00:05:45.074 "bdev_ftl_create", 00:05:45.074 "bdev_virtio_attach_controller", 00:05:45.074 "bdev_virtio_scsi_get_devices", 00:05:45.074 "bdev_virtio_detach_controller", 00:05:45.074 "bdev_virtio_blk_set_hotplug", 00:05:45.074 "bdev_iscsi_delete", 00:05:45.074 "bdev_iscsi_create", 00:05:45.074 "bdev_iscsi_set_options", 00:05:45.074 "accel_error_inject_error", 00:05:45.074 "ioat_scan_accel_module", 00:05:45.074 "dsa_scan_accel_module", 00:05:45.074 "iaa_scan_accel_module", 00:05:45.074 "vfu_virtio_create_scsi_endpoint", 00:05:45.074 "vfu_virtio_scsi_remove_target", 00:05:45.074 "vfu_virtio_scsi_add_target", 00:05:45.074 "vfu_virtio_create_blk_endpoint", 00:05:45.074 "vfu_virtio_delete_endpoint", 00:05:45.074 "keyring_file_remove_key", 00:05:45.074 "keyring_file_add_key", 00:05:45.074 "keyring_linux_set_options", 00:05:45.074 "iscsi_get_histogram", 00:05:45.074 "iscsi_enable_histogram", 00:05:45.074 "iscsi_set_options", 00:05:45.074 "iscsi_get_auth_groups", 00:05:45.074 "iscsi_auth_group_remove_secret", 00:05:45.074 "iscsi_auth_group_add_secret", 00:05:45.074 "iscsi_delete_auth_group", 00:05:45.074 "iscsi_create_auth_group", 00:05:45.074 "iscsi_set_discovery_auth", 00:05:45.074 "iscsi_get_options", 00:05:45.074 "iscsi_target_node_request_logout", 00:05:45.074 "iscsi_target_node_set_redirect", 00:05:45.074 "iscsi_target_node_set_auth", 00:05:45.074 "iscsi_target_node_add_lun", 00:05:45.074 "iscsi_get_stats", 00:05:45.074 "iscsi_get_connections", 00:05:45.074 "iscsi_portal_group_set_auth", 00:05:45.074 "iscsi_start_portal_group", 00:05:45.074 "iscsi_delete_portal_group", 00:05:45.074 "iscsi_create_portal_group", 00:05:45.074 "iscsi_get_portal_groups", 00:05:45.074 "iscsi_delete_target_node", 00:05:45.074 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.074 "iscsi_target_node_add_pg_ig_maps", 00:05:45.074 "iscsi_create_target_node", 00:05:45.075 "iscsi_get_target_nodes", 00:05:45.075 "iscsi_delete_initiator_group", 00:05:45.075 "iscsi_initiator_group_remove_initiators", 00:05:45.075 "iscsi_initiator_group_add_initiators", 00:05:45.075 "iscsi_create_initiator_group", 00:05:45.075 "iscsi_get_initiator_groups", 00:05:45.075 "nvmf_set_crdt", 00:05:45.075 "nvmf_set_config", 00:05:45.075 "nvmf_set_max_subsystems", 00:05:45.075 "nvmf_stop_mdns_prr", 00:05:45.075 "nvmf_publish_mdns_prr", 00:05:45.075 "nvmf_subsystem_get_listeners", 00:05:45.075 "nvmf_subsystem_get_qpairs", 00:05:45.075 "nvmf_subsystem_get_controllers", 00:05:45.075 
"nvmf_get_stats", 00:05:45.075 "nvmf_get_transports", 00:05:45.075 "nvmf_create_transport", 00:05:45.075 "nvmf_get_targets", 00:05:45.075 "nvmf_delete_target", 00:05:45.075 "nvmf_create_target", 00:05:45.075 "nvmf_subsystem_allow_any_host", 00:05:45.075 "nvmf_subsystem_remove_host", 00:05:45.075 "nvmf_subsystem_add_host", 00:05:45.075 "nvmf_ns_remove_host", 00:05:45.075 "nvmf_ns_add_host", 00:05:45.075 "nvmf_subsystem_remove_ns", 00:05:45.075 "nvmf_subsystem_add_ns", 00:05:45.075 "nvmf_subsystem_listener_set_ana_state", 00:05:45.075 "nvmf_discovery_get_referrals", 00:05:45.075 "nvmf_discovery_remove_referral", 00:05:45.075 "nvmf_discovery_add_referral", 00:05:45.075 "nvmf_subsystem_remove_listener", 00:05:45.075 "nvmf_subsystem_add_listener", 00:05:45.075 "nvmf_delete_subsystem", 00:05:45.075 "nvmf_create_subsystem", 00:05:45.075 "nvmf_get_subsystems", 00:05:45.075 "env_dpdk_get_mem_stats", 00:05:45.075 "nbd_get_disks", 00:05:45.075 "nbd_stop_disk", 00:05:45.075 "nbd_start_disk", 00:05:45.075 "ublk_recover_disk", 00:05:45.075 "ublk_get_disks", 00:05:45.075 "ublk_stop_disk", 00:05:45.075 "ublk_start_disk", 00:05:45.075 "ublk_destroy_target", 00:05:45.075 "ublk_create_target", 00:05:45.075 "virtio_blk_create_transport", 00:05:45.075 "virtio_blk_get_transports", 00:05:45.075 "vhost_controller_set_coalescing", 00:05:45.075 "vhost_get_controllers", 00:05:45.075 "vhost_delete_controller", 00:05:45.075 "vhost_create_blk_controller", 00:05:45.075 "vhost_scsi_controller_remove_target", 00:05:45.075 "vhost_scsi_controller_add_target", 00:05:45.075 "vhost_start_scsi_controller", 00:05:45.075 "vhost_create_scsi_controller", 00:05:45.075 "thread_set_cpumask", 00:05:45.075 "framework_get_governor", 00:05:45.075 "framework_get_scheduler", 00:05:45.075 "framework_set_scheduler", 00:05:45.075 "framework_get_reactors", 00:05:45.075 "thread_get_io_channels", 00:05:45.075 "thread_get_pollers", 00:05:45.075 "thread_get_stats", 00:05:45.075 "framework_monitor_context_switch", 00:05:45.075 "spdk_kill_instance", 00:05:45.075 "log_enable_timestamps", 00:05:45.075 "log_get_flags", 00:05:45.075 "log_clear_flag", 00:05:45.075 "log_set_flag", 00:05:45.075 "log_get_level", 00:05:45.075 "log_set_level", 00:05:45.075 "log_get_print_level", 00:05:45.075 "log_set_print_level", 00:05:45.075 "framework_enable_cpumask_locks", 00:05:45.075 "framework_disable_cpumask_locks", 00:05:45.075 "framework_wait_init", 00:05:45.075 "framework_start_init", 00:05:45.075 "scsi_get_devices", 00:05:45.075 "bdev_get_histogram", 00:05:45.075 "bdev_enable_histogram", 00:05:45.075 "bdev_set_qos_limit", 00:05:45.075 "bdev_set_qd_sampling_period", 00:05:45.075 "bdev_get_bdevs", 00:05:45.075 "bdev_reset_iostat", 00:05:45.075 "bdev_get_iostat", 00:05:45.075 "bdev_examine", 00:05:45.075 "bdev_wait_for_examine", 00:05:45.075 "bdev_set_options", 00:05:45.075 "notify_get_notifications", 00:05:45.075 "notify_get_types", 00:05:45.075 "accel_get_stats", 00:05:45.075 "accel_set_options", 00:05:45.075 "accel_set_driver", 00:05:45.075 "accel_crypto_key_destroy", 00:05:45.075 "accel_crypto_keys_get", 00:05:45.075 "accel_crypto_key_create", 00:05:45.075 "accel_assign_opc", 00:05:45.075 "accel_get_module_info", 00:05:45.075 "accel_get_opc_assignments", 00:05:45.075 "vmd_rescan", 00:05:45.075 "vmd_remove_device", 00:05:45.075 "vmd_enable", 00:05:45.075 "sock_get_default_impl", 00:05:45.075 "sock_set_default_impl", 00:05:45.075 "sock_impl_set_options", 00:05:45.075 "sock_impl_get_options", 00:05:45.075 "iobuf_get_stats", 00:05:45.075 "iobuf_set_options", 
00:05:45.075 "keyring_get_keys", 00:05:45.075 "framework_get_pci_devices", 00:05:45.075 "framework_get_config", 00:05:45.075 "framework_get_subsystems", 00:05:45.075 "vfu_tgt_set_base_path", 00:05:45.075 "trace_get_info", 00:05:45.075 "trace_get_tpoint_group_mask", 00:05:45.075 "trace_disable_tpoint_group", 00:05:45.075 "trace_enable_tpoint_group", 00:05:45.075 "trace_clear_tpoint_mask", 00:05:45.075 "trace_set_tpoint_mask", 00:05:45.075 "spdk_get_version", 00:05:45.075 "rpc_get_methods" 00:05:45.075 ] 00:05:45.075 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.075 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.075 01:43:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1296969 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1296969 ']' 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1296969 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1296969 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1296969' 00:05:45.075 killing process with pid 1296969 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1296969 00:05:45.075 01:43:59 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1296969 00:05:45.640 00:05:45.640 real 0m1.203s 00:05:45.640 user 0m2.143s 00:05:45.640 sys 0m0.445s 00:05:45.640 01:44:00 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.640 01:44:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 ************************************ 00:05:45.640 END TEST spdkcli_tcp 00:05:45.640 ************************************ 00:05:45.640 01:44:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.640 01:44:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.640 01:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.640 01:44:00 -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 ************************************ 00:05:45.640 START TEST dpdk_mem_utility 00:05:45.640 ************************************ 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.640 * Looking for test storage... 
00:05:45.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:45.640 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:45.640 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1297172 00:05:45.640 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:45.640 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1297172 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1297172 ']' 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.640 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.640 [2024-07-24 01:44:00.475235] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:45.640 [2024-07-24 01:44:00.475343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297172 ] 00:05:45.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.640 [2024-07-24 01:44:00.533377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.897 [2024-07-24 01:44:00.620323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.155 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.155 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:46.155 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.155 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.155 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.155 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.155 { 00:05:46.155 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.155 } 00:05:46.155 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.155 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.155 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:46.155 1 heaps totaling size 814.000000 MiB 00:05:46.155 size: 814.000000 MiB heap id: 0 00:05:46.155 end heaps---------- 00:05:46.155 8 mempools totaling size 598.116089 MiB 00:05:46.155 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.155 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.155 size: 84.521057 MiB name: bdev_io_1297172 00:05:46.155 size: 51.011292 MiB name: evtpool_1297172 00:05:46.155 
size: 50.003479 MiB name: msgpool_1297172 00:05:46.155 size: 21.763794 MiB name: PDU_Pool 00:05:46.155 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.155 size: 0.026123 MiB name: Session_Pool 00:05:46.155 end mempools------- 00:05:46.155 6 memzones totaling size 4.142822 MiB 00:05:46.155 size: 1.000366 MiB name: RG_ring_0_1297172 00:05:46.155 size: 1.000366 MiB name: RG_ring_1_1297172 00:05:46.155 size: 1.000366 MiB name: RG_ring_4_1297172 00:05:46.155 size: 1.000366 MiB name: RG_ring_5_1297172 00:05:46.155 size: 0.125366 MiB name: RG_ring_2_1297172 00:05:46.155 size: 0.015991 MiB name: RG_ring_3_1297172 00:05:46.155 end memzones------- 00:05:46.155 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.155 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:46.155 list of free elements. size: 12.519348 MiB 00:05:46.155 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:46.155 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:46.155 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:46.155 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:46.155 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:46.155 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:46.155 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:46.155 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:46.155 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:46.155 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:46.155 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:46.155 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:46.155 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:46.155 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:46.155 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:46.155 list of standard malloc elements. 
size: 199.218079 MiB 00:05:46.155 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:46.155 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:46.155 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:46.155 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:46.156 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:46.156 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:46.156 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:46.156 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:46.156 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:46.156 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:46.156 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:46.156 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:46.156 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:46.156 list of memzone associated elements. 
size: 602.262573 MiB 00:05:46.156 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:46.156 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.156 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:46.156 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.156 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:46.156 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1297172_0 00:05:46.156 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:46.156 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1297172_0 00:05:46.156 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:46.156 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1297172_0 00:05:46.156 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:46.156 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.156 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:46.156 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.156 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:46.156 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1297172 00:05:46.156 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:46.156 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1297172 00:05:46.156 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:46.156 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1297172 00:05:46.156 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:46.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.156 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:46.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.156 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:46.156 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.156 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:46.156 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.156 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:46.156 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1297172 00:05:46.156 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:46.156 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1297172 00:05:46.156 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:46.156 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1297172 00:05:46.156 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:46.156 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1297172 00:05:46.156 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:46.156 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1297172 00:05:46.156 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:46.156 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.156 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:46.156 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.156 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:46.156 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.156 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:46.156 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1297172 00:05:46.156 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:46.156 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.156 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:46.156 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.156 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:46.156 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1297172 00:05:46.156 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:46.156 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.156 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:46.156 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1297172 00:05:46.156 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:46.156 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1297172 00:05:46.156 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:46.156 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.156 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.156 01:44:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1297172 00:05:46.156 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1297172 ']' 00:05:46.156 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1297172 00:05:46.156 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:46.156 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.156 01:44:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297172 00:05:46.156 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.156 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.156 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297172' 00:05:46.156 killing process with pid 1297172 00:05:46.156 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1297172 00:05:46.156 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1297172 00:05:46.721 00:05:46.721 real 0m1.050s 00:05:46.721 user 0m0.991s 00:05:46.721 sys 0m0.421s 00:05:46.721 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.721 01:44:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.721 ************************************ 00:05:46.721 END TEST dpdk_mem_utility 00:05:46.721 ************************************ 00:05:46.721 01:44:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.721 01:44:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.721 01:44:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.721 01:44:01 -- common/autotest_common.sh@10 -- # set +x 00:05:46.721 ************************************ 00:05:46.721 START TEST event 00:05:46.721 ************************************ 00:05:46.721 01:44:01 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:46.721 * Looking for test storage... 
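The dpdk_mem_utility section above drives the memory introspection flow: an env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, which scripts/dpdk_mem_info.py then summarizes. A minimal sketch of the same flow, assuming the harness's rpc_cmd wrapper resolves to scripts/rpc.py and using the paths printed in this log:

# sketch only -- dump and summarize DPDK memory, as test_dpdk_mem_info.sh does
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt &
# (wait for /var/tmp/spdk.sock, then:)
$SPDK/scripts/rpc.py env_dpdk_get_mem_stats        # returns {"filename": "/tmp/spdk_mem_dump.txt"}
$SPDK/scripts/dpdk_mem_info.py                     # heap/mempool/memzone totals
$SPDK/scripts/dpdk_mem_info.py -m 0                # per-heap detail, as shown in the dump above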
00:05:46.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:46.721 01:44:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:46.721 01:44:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:46.721 01:44:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.721 01:44:01 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:46.721 01:44:01 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.721 01:44:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.721 ************************************ 00:05:46.721 START TEST event_perf 00:05:46.721 ************************************ 00:05:46.721 01:44:01 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:46.721 Running I/O for 1 seconds...[2024-07-24 01:44:01.558287] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:46.721 [2024-07-24 01:44:01.558398] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297360 ] 00:05:46.722 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.979 [2024-07-24 01:44:01.624824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.979 [2024-07-24 01:44:01.723178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.979 [2024-07-24 01:44:01.723248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.979 [2024-07-24 01:44:01.723346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.979 [2024-07-24 01:44:01.723350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.912 Running I/O for 1 seconds... 00:05:47.912 lcore 0: 234723 00:05:47.912 lcore 1: 234722 00:05:47.912 lcore 2: 234722 00:05:47.912 lcore 3: 234723 00:05:47.912 done. 00:05:47.912 00:05:47.912 real 0m1.258s 00:05:47.912 user 0m4.154s 00:05:47.912 sys 0m0.098s 00:05:47.912 01:44:02 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.912 01:44:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.912 ************************************ 00:05:47.912 END TEST event_perf 00:05:47.912 ************************************ 00:05:48.170 01:44:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:48.170 01:44:02 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:48.170 01:44:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.170 01:44:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.170 ************************************ 00:05:48.170 START TEST event_reactor 00:05:48.170 ************************************ 00:05:48.170 01:44:02 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:48.170 [2024-07-24 01:44:02.859969] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:48.170 [2024-07-24 01:44:02.860033] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297519 ] 00:05:48.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.170 [2024-07-24 01:44:02.922174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.170 [2024-07-24 01:44:03.016434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.540 test_start 00:05:49.540 oneshot 00:05:49.540 tick 100 00:05:49.540 tick 100 00:05:49.540 tick 250 00:05:49.540 tick 100 00:05:49.540 tick 100 00:05:49.540 tick 100 00:05:49.540 tick 250 00:05:49.540 tick 500 00:05:49.540 tick 100 00:05:49.540 tick 100 00:05:49.540 tick 250 00:05:49.540 tick 100 00:05:49.540 tick 100 00:05:49.540 test_end 00:05:49.540 00:05:49.540 real 0m1.251s 00:05:49.540 user 0m1.160s 00:05:49.540 sys 0m0.087s 00:05:49.540 01:44:04 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.540 01:44:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.540 ************************************ 00:05:49.540 END TEST event_reactor 00:05:49.540 ************************************ 00:05:49.540 01:44:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.540 01:44:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:49.540 01:44:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.540 01:44:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.540 ************************************ 00:05:49.540 START TEST event_reactor_perf 00:05:49.540 ************************************ 00:05:49.540 01:44:04 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.540 [2024-07-24 01:44:04.158461] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:49.540 [2024-07-24 01:44:04.158521] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297679 ] 00:05:49.540 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.540 [2024-07-24 01:44:04.223045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.540 [2024-07-24 01:44:04.315675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.914 test_start 00:05:50.914 test_end 00:05:50.914 Performance: 353617 events per second 00:05:50.914 00:05:50.914 real 0m1.249s 00:05:50.914 user 0m1.166s 00:05:50.914 sys 0m0.078s 00:05:50.914 01:44:05 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.914 01:44:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.914 ************************************ 00:05:50.914 END TEST event_reactor_perf 00:05:50.914 ************************************ 00:05:50.914 01:44:05 event -- event/event.sh@49 -- # uname -s 00:05:50.914 01:44:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:50.914 01:44:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.914 01:44:05 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.914 01:44:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.914 01:44:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.914 ************************************ 00:05:50.914 START TEST event_scheduler 00:05:50.914 ************************************ 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:50.914 * Looking for test storage... 00:05:50.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1297858 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1297858 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1297858 ']' 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.914 [2024-07-24 01:44:05.542633] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:05:50.914 [2024-07-24 01:44:05.542704] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297858 ] 00:05:50.914 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.914 [2024-07-24 01:44:05.599326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.914 [2024-07-24 01:44:05.686822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.914 [2024-07-24 01:44:05.686888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.914 [2024-07-24 01:44:05.686954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.914 [2024-07-24 01:44:05.686956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:50.914 [2024-07-24 01:44:05.747790] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:50.914 [2024-07-24 01:44:05.747816] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:50.914 [2024-07-24 01:44:05.747848] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:50.914 [2024-07-24 01:44:05.747859] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:50.914 [2024-07-24 01:44:05.747869] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.914 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.914 01:44:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 [2024-07-24 01:44:05.847561] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:51.173 01:44:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:51.173 01:44:05 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.173 01:44:05 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 ************************************ 00:05:51.173 START TEST scheduler_create_thread 00:05:51.173 ************************************ 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 2 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 3 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 4 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 5 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 6 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 7 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 8 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 9 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 10 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.173 01:44:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.739 01:44:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.739 00:05:51.739 real 0m0.590s 00:05:51.739 user 0m0.008s 00:05:51.739 sys 0m0.005s 00:05:51.739 01:44:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.739 01:44:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.739 ************************************ 00:05:51.739 END TEST scheduler_create_thread 00:05:51.739 ************************************ 00:05:51.739 01:44:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.739 01:44:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1297858 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1297858 ']' 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1297858 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297858 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297858' 00:05:51.739 killing process with pid 1297858 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1297858 00:05:51.739 01:44:06 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1297858 00:05:52.305 [2024-07-24 01:44:06.947566] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
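The event_scheduler section above configures the dynamic scheduler and then creates, retunes, and deletes threads purely through plugin RPCs. A minimal sketch of the same call sequence issued directly with scripts/rpc.py, assuming rpc_cmd forwards to rpc.py and that the scheduler_plugin module shipped with the test is importable; thread ids 11 and 12 are simply the values returned in this run:

# sketch only -- the RPC sequence driven by test/event/scheduler/scheduler.sh
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# (wait for /var/tmp/spdk.sock, then:)
export PYTHONPATH=$SPDK/test/event/scheduler        # assumed location of scheduler_plugin.py
$SPDK/scripts/rpc.py framework_set_scheduler dynamic
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
$SPDK/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12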
00:05:52.305 00:05:52.305 real 0m1.705s 00:05:52.305 user 0m2.175s 00:05:52.305 sys 0m0.324s 00:05:52.305 01:44:07 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.305 01:44:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.305 ************************************ 00:05:52.305 END TEST event_scheduler 00:05:52.305 ************************************ 00:05:52.305 01:44:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.305 01:44:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.305 01:44:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.305 01:44:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.305 01:44:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.563 ************************************ 00:05:52.563 START TEST app_repeat 00:05:52.563 ************************************ 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1298169 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1298169' 00:05:52.563 Process app_repeat pid: 1298169 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.563 spdk_app_start Round 0 00:05:52.563 01:44:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1298169 /var/tmp/spdk-nbd.sock 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1298169 ']' 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.563 01:44:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.563 [2024-07-24 01:44:07.230605] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:05:52.563 [2024-07-24 01:44:07.230677] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298169 ] 00:05:52.563 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.563 [2024-07-24 01:44:07.294172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.563 [2024-07-24 01:44:07.384874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.563 [2024-07-24 01:44:07.384878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.821 01:44:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.821 01:44:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:52.821 01:44:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.079 Malloc0 00:05:53.079 01:44:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.338 Malloc1 00:05:53.338 01:44:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.338 01:44:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.596 /dev/nbd0 00:05:53.596 01:44:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.596 01:44:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.596 01:44:08 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.596 1+0 records in 00:05:53.596 1+0 records out 00:05:53.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190021 s, 21.6 MB/s 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.596 01:44:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:53.596 01:44:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.596 01:44:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.596 01:44:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:53.854 /dev/nbd1 00:05:53.854 01:44:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:53.854 01:44:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.854 1+0 records in 00:05:53.854 1+0 records out 00:05:53.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000153324 s, 26.7 MB/s 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:53.854 01:44:08 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:53.854 01:44:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.854 01:44:08 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.854 01:44:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.854 01:44:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.854 01:44:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.112 { 00:05:54.112 "nbd_device": "/dev/nbd0", 00:05:54.112 "bdev_name": "Malloc0" 00:05:54.112 }, 00:05:54.112 { 00:05:54.112 "nbd_device": "/dev/nbd1", 00:05:54.112 "bdev_name": "Malloc1" 00:05:54.112 } 00:05:54.112 ]' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.112 { 00:05:54.112 "nbd_device": "/dev/nbd0", 00:05:54.112 "bdev_name": "Malloc0" 00:05:54.112 }, 00:05:54.112 { 00:05:54.112 "nbd_device": "/dev/nbd1", 00:05:54.112 "bdev_name": "Malloc1" 00:05:54.112 } 00:05:54.112 ]' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.112 /dev/nbd1' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.112 /dev/nbd1' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.112 256+0 records in 00:05:54.112 256+0 records out 00:05:54.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499265 s, 210 MB/s 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.112 256+0 records in 00:05:54.112 256+0 records out 00:05:54.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023932 s, 43.8 MB/s 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.112 256+0 records in 00:05:54.112 256+0 records out 00:05:54.112 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0249173 s, 42.1 MB/s 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.112 01:44:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.113 01:44:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.370 01:44:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.370 01:44:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.370 01:44:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.370 01:44:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.371 01:44:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.371 01:44:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.371 01:44:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.371 01:44:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.371 01:44:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.371 01:44:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.628 01:44:09 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.628 01:44:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.629 01:44:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.629 01:44:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.886 01:44:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:54.886 01:44:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:54.886 01:44:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.886 01:44:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:54.886 01:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:54.886 01:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.145 01:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.145 01:44:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.145 01:44:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.145 01:44:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.145 01:44:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.145 01:44:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.145 01:44:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.402 01:44:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:55.402 [2024-07-24 01:44:10.283275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.660 [2024-07-24 01:44:10.374543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.660 [2024-07-24 01:44:10.374544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.660 [2024-07-24 01:44:10.431961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:55.660 [2024-07-24 01:44:10.432030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.186 01:44:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.186 01:44:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:58.186 spdk_app_start Round 1 00:05:58.186 01:44:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1298169 /var/tmp/spdk-nbd.sock 00:05:58.186 01:44:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1298169 ']' 00:05:58.186 01:44:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.186 01:44:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.186 01:44:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
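For readers following the trace, the Round 0 xtrace above reduces to a short RPC/NBD sequence. The sketch below is reconstructed only from the calls visible in the log (paths shortened, the waitfornbd bookkeeping and error handling omitted); it is a sketch, not a verbatim excerpt of nbd_common.sh:

    # assumes the app_repeat instance is listening on /var/tmp/spdk-nbd.sock
    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                 # -> Malloc0
    $rpc bdev_malloc_create 64 4096                 # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$nbd"             # read back and compare
    done
    rm nbdrandtest
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

Each /dev/nbdN is treated as ready once it shows up in /proc/partitions and a single 4 KiB direct read from it succeeds; that is what the grep/dd/stat lines in the trace are doing.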
00:05:58.186 01:44:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.186 01:44:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.443 01:44:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.443 01:44:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:58.443 01:44:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.700 Malloc0 00:05:58.700 01:44:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.957 Malloc1 00:05:58.957 01:44:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.957 01:44:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.214 /dev/nbd0 00:05:59.214 01:44:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.214 01:44:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.214 01:44:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:59.214 01:44:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.214 01:44:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.214 01:44:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:59.215 1+0 records in 00:05:59.215 1+0 records out 00:05:59.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216368 s, 18.9 MB/s 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.215 01:44:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.215 01:44:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.215 01:44:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.215 01:44:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.472 /dev/nbd1 00:05:59.729 01:44:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.730 01:44:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.730 1+0 records in 00:05:59.730 1+0 records out 00:05:59.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213322 s, 19.2 MB/s 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:59.730 01:44:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:59.730 01:44:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.730 01:44:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.730 01:44:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.730 01:44:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.730 01:44:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:59.988 { 00:05:59.988 "nbd_device": "/dev/nbd0", 00:05:59.988 "bdev_name": "Malloc0" 00:05:59.988 }, 00:05:59.988 { 00:05:59.988 "nbd_device": "/dev/nbd1", 00:05:59.988 "bdev_name": "Malloc1" 00:05:59.988 } 00:05:59.988 ]' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.988 { 00:05:59.988 "nbd_device": "/dev/nbd0", 00:05:59.988 "bdev_name": "Malloc0" 00:05:59.988 }, 00:05:59.988 { 00:05:59.988 "nbd_device": "/dev/nbd1", 00:05:59.988 "bdev_name": "Malloc1" 00:05:59.988 } 00:05:59.988 ]' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.988 /dev/nbd1' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.988 /dev/nbd1' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.988 256+0 records in 00:05:59.988 256+0 records out 00:05:59.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506701 s, 207 MB/s 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.988 256+0 records in 00:05:59.988 256+0 records out 00:05:59.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218306 s, 48.0 MB/s 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.988 256+0 records in 00:05:59.988 256+0 records out 00:05:59.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250721 s, 41.8 MB/s 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.988 01:44:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.245 01:44:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.512 01:44:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.822 01:44:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.822 01:44:15 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.080 01:44:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.338 [2024-07-24 01:44:16.078283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.338 [2024-07-24 01:44:16.167190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.338 [2024-07-24 01:44:16.167194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.338 [2024-07-24 01:44:16.229657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.338 [2024-07-24 01:44:16.229725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.616 01:44:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.616 01:44:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.616 spdk_app_start Round 2 00:06:04.616 01:44:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1298169 /var/tmp/spdk-nbd.sock 00:06:04.616 01:44:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1298169 ']' 00:06:04.616 01:44:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.616 01:44:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.616 01:44:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
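The round boundary that just went by follows a fixed pattern on the shell side, while the app_repeat process itself (pid 1298169) stays alive and simply re-enters spdk_app_start() after each SIGTERM, which is why the same pid is waited on in every round. A rough sketch of the loop, showing only the calls that appear in this trace:

    # event.sh side, abbreviated; the loop body is the Round 0 sequence shown earlier
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten 1298169 /var/tmp/spdk-nbd.sock             # poll until the RPC socket answers
        # ... bdev_malloc_create + nbd_rpc_data_verify ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                                  # let the app start its next iteration
    done

After the third iteration the application comes up once more (Round 3) and the harness kills the pid directly rather than over RPC, which is the killprocess call further down.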
00:06:04.616 01:44:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.616 01:44:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.616 01:44:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.616 01:44:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.616 01:44:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.616 Malloc0 00:06:04.616 01:44:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.874 Malloc1 00:06:04.874 01:44:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.874 01:44:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.132 /dev/nbd0 00:06:05.132 01:44:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.132 01:44:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.132 1+0 records in 00:06:05.132 1+0 records out 00:06:05.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194386 s, 21.1 MB/s 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.132 01:44:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.132 01:44:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.132 01:44:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.132 01:44:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.389 /dev/nbd1 00:06:05.389 01:44:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.389 01:44:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.389 01:44:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.389 01:44:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.389 01:44:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.389 01:44:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.390 1+0 records in 00:06:05.390 1+0 records out 00:06:05.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240318 s, 17.0 MB/s 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.390 01:44:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.390 01:44:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.390 01:44:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.390 01:44:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.390 01:44:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.390 01:44:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.647 { 00:06:05.647 "nbd_device": "/dev/nbd0", 00:06:05.647 "bdev_name": "Malloc0" 00:06:05.647 }, 00:06:05.647 { 00:06:05.647 "nbd_device": "/dev/nbd1", 00:06:05.647 "bdev_name": "Malloc1" 00:06:05.647 } 00:06:05.647 ]' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.647 { 00:06:05.647 "nbd_device": "/dev/nbd0", 00:06:05.647 "bdev_name": "Malloc0" 00:06:05.647 }, 00:06:05.647 { 00:06:05.647 "nbd_device": "/dev/nbd1", 00:06:05.647 "bdev_name": "Malloc1" 00:06:05.647 } 00:06:05.647 ]' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.647 /dev/nbd1' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.647 /dev/nbd1' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.647 256+0 records in 00:06:05.647 256+0 records out 00:06:05.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465169 s, 225 MB/s 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.647 256+0 records in 00:06:05.647 256+0 records out 00:06:05.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243455 s, 43.1 MB/s 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.647 01:44:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.905 256+0 records in 00:06:05.905 256+0 records out 00:06:05.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023328 s, 44.9 MB/s 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.905 01:44:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.162 01:44:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.162 01:44:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.162 01:44:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.162 01:44:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.163 01:44:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.163 01:44:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.163 01:44:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.163 01:44:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.163 01:44:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.163 01:44:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.419 01:44:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.676 01:44:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.676 01:44:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.934 01:44:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.191 [2024-07-24 01:44:21.897688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.191 [2024-07-24 01:44:21.987452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.191 [2024-07-24 01:44:21.987455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.191 [2024-07-24 01:44:22.049059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.191 [2024-07-24 01:44:22.049136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.469 01:44:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1298169 /var/tmp/spdk-nbd.sock 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1298169 ']' 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
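Before each teardown the harness also confirms that no NBD devices are left exported. In the trace above that is the nbd_get_disks / jq / grep -c chain that ends in count=0; roughly, as a sketch of what those xtrace lines do:

    # expect zero exported NBD devices once both nbd_stop_disk calls have returned
    json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)    # '[]' at this point
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)                 # grep -c exits non-zero on zero matches
    [ "$count" -eq 0 ] || exit 1

The same helper runs right after nbd_start_disk earlier in each round, where the expected count is 2 instead of 0.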
00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:10.469 01:44:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1298169 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1298169 ']' 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1298169 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1298169 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1298169' 00:06:10.469 killing process with pid 1298169 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1298169 00:06:10.469 01:44:24 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1298169 00:06:10.469 spdk_app_start is called in Round 0. 00:06:10.469 Shutdown signal received, stop current app iteration 00:06:10.469 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 reinitialization... 00:06:10.469 spdk_app_start is called in Round 1. 00:06:10.469 Shutdown signal received, stop current app iteration 00:06:10.469 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 reinitialization... 00:06:10.469 spdk_app_start is called in Round 2. 00:06:10.469 Shutdown signal received, stop current app iteration 00:06:10.469 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 reinitialization... 00:06:10.469 spdk_app_start is called in Round 3. 
00:06:10.469 Shutdown signal received, stop current app iteration 00:06:10.469 01:44:25 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.469 01:44:25 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.469 00:06:10.469 real 0m17.924s 00:06:10.469 user 0m39.033s 00:06:10.469 sys 0m3.222s 00:06:10.469 01:44:25 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.469 01:44:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.469 ************************************ 00:06:10.469 END TEST app_repeat 00:06:10.469 ************************************ 00:06:10.469 01:44:25 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.469 01:44:25 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.469 01:44:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.469 01:44:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.469 01:44:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.469 ************************************ 00:06:10.469 START TEST cpu_locks 00:06:10.469 ************************************ 00:06:10.469 01:44:25 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.469 * Looking for test storage... 00:06:10.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:10.469 01:44:25 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.469 01:44:25 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.469 01:44:25 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.469 01:44:25 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.469 01:44:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.469 01:44:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.469 01:44:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.469 ************************************ 00:06:10.469 START TEST default_locks 00:06:10.469 ************************************ 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1300515 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1300515 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1300515 ']' 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
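The default_locks test starting here launches a bare spdk_tgt with core mask 0x1 and, as the trace that follows shows, asserts that the running target holds its per-core CPU lock file: lslocks is run against the target pid and its output is grepped for spdk_cpu_lock. A minimal sketch of that check (the pid is the one from this run; locks_exist is the cpu_locks.sh helper doing it):

    # non-zero exit here means no spdk_cpu_lock file lock was found for the target
    pid=1300515
    lslocks -p "$pid" | grep -q spdk_cpu_lock

The stray 'lslocks: write error' in the output is most likely harmless: grep -q exits at the first match and closes the pipe, so lslocks fails on its remaining writes. After killprocess, the NOT waitforlisten block further down is expected to fail; the 'No such process' and 'is no longer running' messages are the check that the pid is really gone.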
00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.469 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.469 [2024-07-24 01:44:25.307901] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:10.469 [2024-07-24 01:44:25.308003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300515 ] 00:06:10.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.727 [2024-07-24 01:44:25.365022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.727 [2024-07-24 01:44:25.451497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.986 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.986 01:44:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:10.986 01:44:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1300515 00:06:10.986 01:44:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1300515 00:06:10.986 01:44:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.243 lslocks: write error 00:06:11.243 01:44:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1300515 00:06:11.243 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1300515 ']' 00:06:11.243 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1300515 00:06:11.243 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1300515 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1300515' 00:06:11.244 killing process with pid 1300515 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1300515 00:06:11.244 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1300515 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1300515 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1300515 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1300515 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1300515 ']' 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1300515) - No such process 00:06:11.809 ERROR: process (pid: 1300515) is no longer running 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.809 00:06:11.809 real 0m1.257s 00:06:11.809 user 0m1.195s 00:06:11.809 sys 0m0.536s 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.809 01:44:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.809 ************************************ 00:06:11.809 END TEST default_locks 00:06:11.809 ************************************ 00:06:11.809 01:44:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.809 01:44:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.809 01:44:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.809 01:44:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.809 ************************************ 00:06:11.809 START TEST default_locks_via_rpc 00:06:11.809 ************************************ 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1300684 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
1300684 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1300684 ']' 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.809 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.810 01:44:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.810 [2024-07-24 01:44:26.612002] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:11.810 [2024-07-24 01:44:26.612104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300684 ] 00:06:11.810 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.810 [2024-07-24 01:44:26.673919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.068 [2024-07-24 01:44:26.762463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1300684 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1300684 00:06:12.326 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 1300684 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1300684 ']' 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1300684 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1300684 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1300684' 00:06:12.584 killing process with pid 1300684 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1300684 00:06:12.584 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1300684 00:06:13.150 00:06:13.150 real 0m1.200s 00:06:13.150 user 0m1.138s 00:06:13.150 sys 0m0.528s 00:06:13.150 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.150 01:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.150 ************************************ 00:06:13.150 END TEST default_locks_via_rpc 00:06:13.150 ************************************ 00:06:13.150 01:44:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:13.150 01:44:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.150 01:44:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.150 01:44:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.150 ************************************ 00:06:13.150 START TEST non_locking_app_on_locked_coremask 00:06:13.150 ************************************ 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1300846 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1300846 /var/tmp/spdk.sock 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1300846 ']' 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:13.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.150 01:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.150 [2024-07-24 01:44:27.865459] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:13.150 [2024-07-24 01:44:27.865547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300846 ] 00:06:13.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.150 [2024-07-24 01:44:27.922786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.150 [2024-07-24 01:44:28.013040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1300860 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1300860 /var/tmp/spdk2.sock 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1300860 ']' 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.408 01:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.667 [2024-07-24 01:44:28.309950] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:13.667 [2024-07-24 01:44:28.310027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300860 ] 00:06:13.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.667 [2024-07-24 01:44:28.404118] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:13.667 [2024-07-24 01:44:28.404152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.925 [2024-07-24 01:44:28.588479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.490 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.490 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:14.490 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1300846 00:06:14.490 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1300846 00:06:14.490 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.056 lslocks: write error 00:06:15.056 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1300846 00:06:15.056 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1300846 ']' 00:06:15.056 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1300846 00:06:15.056 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.056 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.056 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1300846 00:06:15.057 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.057 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.057 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1300846' 00:06:15.057 killing process with pid 1300846 00:06:15.057 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1300846 00:06:15.057 01:44:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1300846 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1300860 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1300860 ']' 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1300860 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1300860 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1300860' 00:06:15.991 
killing process with pid 1300860 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1300860 00:06:15.991 01:44:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1300860 00:06:16.249 00:06:16.249 real 0m3.218s 00:06:16.249 user 0m3.367s 00:06:16.249 sys 0m1.085s 00:06:16.249 01:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.249 01:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.249 ************************************ 00:06:16.249 END TEST non_locking_app_on_locked_coremask 00:06:16.249 ************************************ 00:06:16.249 01:44:31 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.249 01:44:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.249 01:44:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.249 01:44:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.249 ************************************ 00:06:16.249 START TEST locking_app_on_unlocked_coremask 00:06:16.249 ************************************ 00:06:16.249 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:16.249 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1301280 00:06:16.249 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1301280 /var/tmp/spdk.sock 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1301280 ']' 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.250 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.250 [2024-07-24 01:44:31.135432] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:16.250 [2024-07-24 01:44:31.135520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301280 ] 00:06:16.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.508 [2024-07-24 01:44:31.192905] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.508 [2024-07-24 01:44:31.192946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.508 [2024-07-24 01:44:31.281714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1301291 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1301291 /var/tmp/spdk2.sock 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1301291 ']' 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.767 01:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.767 [2024-07-24 01:44:31.588824] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:16.767 [2024-07-24 01:44:31.588918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301291 ] 00:06:16.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.025 [2024-07-24 01:44:31.687090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.025 [2024-07-24 01:44:31.874409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.957 01:44:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.957 01:44:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:17.957 01:44:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1301291 00:06:17.957 01:44:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1301291 00:06:17.957 01:44:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.215 lslocks: write error 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1301280 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1301280 ']' 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1301280 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1301280 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1301280' 00:06:18.215 killing process with pid 1301280 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1301280 00:06:18.215 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1301280 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1301291 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1301291 ']' 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1301291 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1301291 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1301291' 00:06:19.184 killing process with pid 1301291 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1301291 00:06:19.184 01:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1301291 00:06:19.447 00:06:19.447 real 0m3.175s 00:06:19.447 user 0m3.345s 00:06:19.447 sys 0m1.049s 00:06:19.447 01:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.447 01:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.447 ************************************ 00:06:19.447 END TEST locking_app_on_unlocked_coremask 00:06:19.447 ************************************ 00:06:19.447 01:44:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.447 01:44:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.447 01:44:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.447 01:44:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.447 ************************************ 00:06:19.447 START TEST locking_app_on_locked_coremask 00:06:19.447 ************************************ 00:06:19.447 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:19.447 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1301715 00:06:19.447 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1301715 /var/tmp/spdk.sock 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1301715 ']' 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.448 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.706 [2024-07-24 01:44:34.359541] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:19.706 [2024-07-24 01:44:34.359649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301715 ] 00:06:19.706 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.706 [2024-07-24 01:44:34.421069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.706 [2024-07-24 01:44:34.510144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1301725 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1301725 /var/tmp/spdk2.sock 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1301725 /var/tmp/spdk2.sock 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1301725 /var/tmp/spdk2.sock 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1301725 ']' 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.964 01:44:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.964 [2024-07-24 01:44:34.817856] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:19.964 [2024-07-24 01:44:34.817950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301725 ] 00:06:19.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.222 [2024-07-24 01:44:34.914384] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1301715 has claimed it. 00:06:20.222 [2024-07-24 01:44:34.914456] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1301725) - No such process 00:06:20.787 ERROR: process (pid: 1301725) is no longer running 00:06:20.787 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.787 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1301715 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1301715 00:06:20.788 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.353 lslocks: write error 00:06:21.353 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1301715 00:06:21.353 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1301715 ']' 00:06:21.353 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1301715 00:06:21.353 01:44:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1301715 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1301715' 00:06:21.353 killing process with pid 1301715 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1301715 00:06:21.353 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1301715 00:06:21.611 00:06:21.611 real 0m2.114s 00:06:21.611 user 0m2.260s 00:06:21.611 sys 0m0.681s 00:06:21.611 01:44:36 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.611 01:44:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.611 ************************************ 00:06:21.611 END TEST locking_app_on_locked_coremask 00:06:21.611 ************************************ 00:06:21.611 01:44:36 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:21.611 01:44:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.611 01:44:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.611 01:44:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.611 ************************************ 00:06:21.611 START TEST locking_overlapped_coremask 00:06:21.611 ************************************ 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1302014 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1302014 /var/tmp/spdk.sock 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1302014 ']' 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.611 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.870 [2024-07-24 01:44:36.522723] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:21.870 [2024-07-24 01:44:36.522819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302014 ] 00:06:21.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.870 [2024-07-24 01:44:36.585657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.870 [2024-07-24 01:44:36.675634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.870 [2024-07-24 01:44:36.675689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.870 [2024-07-24 01:44:36.675706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1302025 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1302025 /var/tmp/spdk2.sock 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1302025 /var/tmp/spdk2.sock 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1302025 /var/tmp/spdk2.sock 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1302025 ']' 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.128 01:44:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.128 [2024-07-24 01:44:36.977121] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:22.128 [2024-07-24 01:44:36.977215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302025 ] 00:06:22.128 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.385 [2024-07-24 01:44:37.064951] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1302014 has claimed it. 00:06:22.385 [2024-07-24 01:44:37.065015] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:22.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1302025) - No such process 00:06:22.950 ERROR: process (pid: 1302025) is no longer running 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1302014 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1302014 ']' 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1302014 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1302014 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1302014' 00:06:22.951 killing process with pid 1302014 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1302014 00:06:22.951 01:44:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1302014 00:06:23.517 00:06:23.517 real 0m1.642s 00:06:23.517 user 0m4.431s 00:06:23.517 sys 0m0.450s 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.517 ************************************ 00:06:23.517 END TEST locking_overlapped_coremask 00:06:23.517 ************************************ 00:06:23.517 01:44:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:23.517 01:44:38 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.517 01:44:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.517 01:44:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.517 ************************************ 00:06:23.517 START TEST locking_overlapped_coremask_via_rpc 00:06:23.517 ************************************ 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1302187 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1302187 /var/tmp/spdk.sock 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1302187 ']' 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.517 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.517 [2024-07-24 01:44:38.214218] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:23.517 [2024-07-24 01:44:38.214330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302187 ] 00:06:23.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.517 [2024-07-24 01:44:38.275039] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:23.517 [2024-07-24 01:44:38.275088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.517 [2024-07-24 01:44:38.364056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.517 [2024-07-24 01:44:38.364079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.517 [2024-07-24 01:44:38.364082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1302322 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1302322 /var/tmp/spdk2.sock 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1302322 ']' 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.775 01:44:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.775 [2024-07-24 01:44:38.649433] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:23.775 [2024-07-24 01:44:38.649533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302322 ] 00:06:24.033 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.033 [2024-07-24 01:44:38.738566] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.033 [2024-07-24 01:44:38.738610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.033 [2024-07-24 01:44:38.914282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.033 [2024-07-24 01:44:38.914308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.033 [2024-07-24 01:44:38.914310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.966 [2024-07-24 01:44:39.618421] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1302187 has claimed it. 
00:06:24.966 request: 00:06:24.966 { 00:06:24.966 "method": "framework_enable_cpumask_locks", 00:06:24.966 "req_id": 1 00:06:24.966 } 00:06:24.966 Got JSON-RPC error response 00:06:24.966 response: 00:06:24.966 { 00:06:24.966 "code": -32603, 00:06:24.966 "message": "Failed to claim CPU core: 2" 00:06:24.966 } 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1302187 /var/tmp/spdk.sock 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1302187 ']' 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.966 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1302322 /var/tmp/spdk2.sock 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1302322 ']' 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
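(Editor's sketch, not part of the captured trace: the JSON-RPC exchange above is the expected failure path — the second target on /var/tmp/spdk2.sock was started with --disable-cpumask-locks, so asking it to re-enable cpumask locks collides with the cores already claimed by pid 1302187 and returns -32603. A minimal way to re-issue that call, assuming the rpc_cmd helper from autotest_common.sh is sourced and both targets are still running.)
    # sketch only -- repeats the RPC traced above against the second target
    rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "claim rejected as expected (-32603: Failed to claim CPU core: 2)"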
00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.224 01:44:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.224 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.224 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:25.224 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:25.224 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:25.483 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:25.483 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:25.483 00:06:25.483 real 0m1.958s 00:06:25.483 user 0m1.015s 00:06:25.483 sys 0m0.190s 00:06:25.483 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.483 01:44:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.483 ************************************ 00:06:25.483 END TEST locking_overlapped_coremask_via_rpc 00:06:25.483 ************************************ 00:06:25.483 01:44:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:25.483 01:44:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1302187 ]] 00:06:25.483 01:44:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1302187 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1302187 ']' 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1302187 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1302187 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1302187' 00:06:25.483 killing process with pid 1302187 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1302187 00:06:25.483 01:44:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1302187 00:06:25.741 01:44:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1302322 ]] 00:06:25.741 01:44:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1302322 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1302322 ']' 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1302322 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1302322 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1302322' 00:06:25.741 killing process with pid 1302322 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1302322 00:06:25.741 01:44:40 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1302322 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1302187 ]] 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1302187 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1302187 ']' 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1302187 00:06:26.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1302187) - No such process 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1302187 is not found' 00:06:26.306 Process with pid 1302187 is not found 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1302322 ]] 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1302322 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1302322 ']' 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1302322 00:06:26.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1302322) - No such process 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1302322 is not found' 00:06:26.306 Process with pid 1302322 is not found 00:06:26.306 01:44:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.306 00:06:26.306 real 0m15.819s 00:06:26.306 user 0m27.517s 00:06:26.306 sys 0m5.392s 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.306 01:44:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.306 ************************************ 00:06:26.307 END TEST cpu_locks 00:06:26.307 ************************************ 00:06:26.307 00:06:26.307 real 0m39.551s 00:06:26.307 user 1m15.326s 00:06:26.307 sys 0m9.445s 00:06:26.307 01:44:41 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.307 01:44:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.307 ************************************ 00:06:26.307 END TEST event 00:06:26.307 ************************************ 00:06:26.307 01:44:41 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.307 01:44:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.307 01:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.307 01:44:41 -- common/autotest_common.sh@10 -- # set +x 00:06:26.307 ************************************ 00:06:26.307 START TEST thread 00:06:26.307 ************************************ 00:06:26.307 01:44:41 thread -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:26.307 * Looking for test storage... 00:06:26.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:26.307 01:44:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.307 01:44:41 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:26.307 01:44:41 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.307 01:44:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.307 ************************************ 00:06:26.307 START TEST thread_poller_perf 00:06:26.307 ************************************ 00:06:26.307 01:44:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.307 [2024-07-24 01:44:41.152012] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:26.307 [2024-07-24 01:44:41.152083] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302686 ] 00:06:26.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.565 [2024-07-24 01:44:41.212948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.565 [2024-07-24 01:44:41.300699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.565 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:27.498 ====================================== 00:06:27.498 busy:2710995072 (cyc) 00:06:27.498 total_run_count: 298000 00:06:27.498 tsc_hz: 2700000000 (cyc) 00:06:27.498 ====================================== 00:06:27.498 poller_cost: 9097 (cyc), 3369 (nsec) 00:06:27.498 00:06:27.498 real 0m1.251s 00:06:27.498 user 0m1.159s 00:06:27.498 sys 0m0.087s 00:06:27.498 01:44:42 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.498 01:44:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.498 ************************************ 00:06:27.498 END TEST thread_poller_perf 00:06:27.498 ************************************ 00:06:27.755 01:44:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.755 01:44:42 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:27.755 01:44:42 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.755 01:44:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.755 ************************************ 00:06:27.755 START TEST thread_poller_perf 00:06:27.755 ************************************ 00:06:27.755 01:44:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:27.755 [2024-07-24 01:44:42.447262] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
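Note: the poller_cost figure reported for the 1 microsecond run above is just the busy cycle count divided by the number of poller invocations, converted to nanoseconds with the reported TSC rate. A quick cross-check with shell arithmetic:

    echo $(( 2710995072 / 298000 ))              # ~9097 cycles per poller call
    echo $(( 9097 * 1000000000 / 2700000000 ))   # ~3369 ns at tsc_hz 2700000000 (cyc)

The zero-period (-l 0) run that follows does the same division with its own figures, 2702450439 / 3856000, giving roughly 700 cycles or about 259 ns per call, since untimed pollers are invoked on every reactor iteration and are far cheaper per call than the 1 microsecond timed pollers measured here.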
00:06:27.756 [2024-07-24 01:44:42.447370] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302840 ] 00:06:27.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.756 [2024-07-24 01:44:42.506799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.756 [2024-07-24 01:44:42.598985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.756 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:29.129 ====================================== 00:06:29.129 busy:2702450439 (cyc) 00:06:29.129 total_run_count: 3856000 00:06:29.129 tsc_hz: 2700000000 (cyc) 00:06:29.129 ====================================== 00:06:29.129 poller_cost: 700 (cyc), 259 (nsec) 00:06:29.129 00:06:29.129 real 0m1.246s 00:06:29.129 user 0m1.167s 00:06:29.129 sys 0m0.074s 00:06:29.129 01:44:43 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.129 01:44:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.129 ************************************ 00:06:29.129 END TEST thread_poller_perf 00:06:29.129 ************************************ 00:06:29.129 01:44:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:29.129 00:06:29.129 real 0m2.637s 00:06:29.129 user 0m2.390s 00:06:29.129 sys 0m0.247s 00:06:29.129 01:44:43 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.129 01:44:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.129 ************************************ 00:06:29.129 END TEST thread 00:06:29.129 ************************************ 00:06:29.129 01:44:43 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:29.129 01:44:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.129 01:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.129 01:44:43 -- common/autotest_common.sh@10 -- # set +x 00:06:29.129 ************************************ 00:06:29.129 START TEST accel 00:06:29.129 ************************************ 00:06:29.129 01:44:43 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:29.129 * Looking for test storage... 
00:06:29.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:29.129 01:44:43 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:29.129 01:44:43 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:29.129 01:44:43 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.129 01:44:43 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1303037 00:06:29.129 01:44:43 accel -- accel/accel.sh@63 -- # waitforlisten 1303037 00:06:29.129 01:44:43 accel -- common/autotest_common.sh@829 -- # '[' -z 1303037 ']' 00:06:29.129 01:44:43 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.129 01:44:43 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:29.129 01:44:43 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:29.129 01:44:43 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.129 01:44:43 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.129 01:44:43 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.130 01:44:43 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.130 01:44:43 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.130 01:44:43 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.130 01:44:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.130 01:44:43 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.130 01:44:43 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.130 01:44:43 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:29.130 01:44:43 accel -- accel/accel.sh@41 -- # jq -r . 00:06:29.130 [2024-07-24 01:44:43.854489] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:29.130 [2024-07-24 01:44:43.854574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303037 ] 00:06:29.130 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.130 [2024-07-24 01:44:43.914566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.130 [2024-07-24 01:44:44.007544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.387 01:44:44 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.387 01:44:44 accel -- common/autotest_common.sh@862 -- # return 0 00:06:29.387 01:44:44 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:29.387 01:44:44 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:29.387 01:44:44 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:29.387 01:44:44 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:29.387 01:44:44 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:29.387 01:44:44 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:29.387 01:44:44 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:29.387 01:44:44 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.387 01:44:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.387 01:44:44 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 
01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # IFS== 00:06:29.645 01:44:44 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:29.645 01:44:44 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:29.645 01:44:44 accel -- accel/accel.sh@75 -- # killprocess 1303037 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@948 -- # '[' -z 1303037 ']' 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@952 -- # kill -0 1303037 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@953 -- # uname 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1303037 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1303037' 00:06:29.645 killing process with pid 1303037 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@967 -- # kill 1303037 00:06:29.645 01:44:44 accel -- common/autotest_common.sh@972 -- # wait 1303037 00:06:29.903 01:44:44 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:29.903 01:44:44 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:29.903 01:44:44 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:29.903 01:44:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.903 01:44:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.903 01:44:44 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:29.903 01:44:44 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:29.903 01:44:44 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:30.161 01:44:44 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
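Note: the long IFS==/read loop traced through accel.sh@71-73 above just turns the accel_get_opc_assignments RPC output into an opcode-to-module map; in this run every opcode resolves to the software module. A condensed sketch of the same idea, with the stock rpc.py client standing in for the harness's rpc_py wrapper:

    declare -A expected_opcs
    exp_opcs=($(scripts/rpc.py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"
        expected_opcs["$opc"]=$module
    done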
00:06:30.161 01:44:44 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.161 01:44:44 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:30.161 01:44:44 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:30.161 01:44:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.161 01:44:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.161 01:44:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.161 ************************************ 00:06:30.161 START TEST accel_missing_filename 00:06:30.161 ************************************ 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.161 01:44:44 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:30.161 01:44:44 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:30.161 [2024-07-24 01:44:44.878281] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:30.161 [2024-07-24 01:44:44.878359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303205 ] 00:06:30.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.161 [2024-07-24 01:44:44.939666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.161 [2024-07-24 01:44:45.032634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.419 [2024-07-24 01:44:45.094536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.419 [2024-07-24 01:44:45.173528] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:30.419 A filename is required. 
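Note: the 'A filename is required.' failure above is the expected result of this negative case: the compress workload has no default input, so accel_perf aborts during startup when -l is omitted. Reproduced outside the harness (binary path as in this workspace) the failing shape is simply:

    build/examples/accel_perf -t 1 -w compress     # no -l: exits with "A filename is required."

The compress_verify case that follows supplies -l .../test/accel/bib but adds -y, and is rejected for a different reason (compression does not support the verify option).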
00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.419 00:06:30.419 real 0m0.390s 00:06:30.419 user 0m0.283s 00:06:30.419 sys 0m0.141s 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.419 01:44:45 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:30.419 ************************************ 00:06:30.419 END TEST accel_missing_filename 00:06:30.419 ************************************ 00:06:30.419 01:44:45 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.419 01:44:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.419 01:44:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.419 01:44:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.419 ************************************ 00:06:30.419 START TEST accel_compress_verify 00:06:30.419 ************************************ 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.419 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.419 
01:44:45 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:30.419 01:44:45 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:30.419 [2024-07-24 01:44:45.313306] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:30.419 [2024-07-24 01:44:45.313403] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303345 ] 00:06:30.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.677 [2024-07-24 01:44:45.378241] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.677 [2024-07-24 01:44:45.469498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.677 [2024-07-24 01:44:45.529681] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.935 [2024-07-24 01:44:45.611618] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:30.935 00:06:30.935 Compression does not support the verify option, aborting. 00:06:30.935 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:30.935 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.935 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:30.935 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:30.936 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:30.936 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.936 00:06:30.936 real 0m0.397s 00:06:30.936 user 0m0.290s 00:06:30.936 sys 0m0.143s 00:06:30.936 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.936 01:44:45 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:30.936 ************************************ 00:06:30.936 END TEST accel_compress_verify 00:06:30.936 ************************************ 00:06:30.936 01:44:45 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:30.936 01:44:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:30.936 01:44:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.936 01:44:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.936 ************************************ 00:06:30.936 START TEST accel_wrong_workload 00:06:30.936 ************************************ 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 
1 -w foobar 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:30.936 01:44:45 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:30.936 Unsupported workload type: foobar 00:06:30.936 [2024-07-24 01:44:45.753853] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:30.936 accel_perf options: 00:06:30.936 [-h help message] 00:06:30.936 [-q queue depth per core] 00:06:30.936 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.936 [-T number of threads per core 00:06:30.936 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.936 [-t time in seconds] 00:06:30.936 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.936 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.936 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.936 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.936 [-S for crc32c workload, use this seed value (default 0) 00:06:30.936 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.936 [-f for fill workload, use this BYTE value (default 255) 00:06:30.936 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.936 [-y verify result if this switch is on] 00:06:30.936 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.936 Can be used to spread operations across a wider range of memory. 
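Note: the option listing above is printed whenever argument parsing fails; 'foobar' is simply not in the supported -w workload list. A well-formed invocation per that list, matching what the crc32c cases below drive through accel_test, would be:

    build/examples/accel_perf -t 1 -w crc32c -S 32 -y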
00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.936 00:06:30.936 real 0m0.021s 00:06:30.936 user 0m0.010s 00:06:30.936 sys 0m0.011s 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.936 01:44:45 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:30.936 ************************************ 00:06:30.936 END TEST accel_wrong_workload 00:06:30.936 ************************************ 00:06:30.936 Error: writing output failed: Broken pipe 00:06:30.936 01:44:45 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.936 01:44:45 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:30.936 01:44:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.936 01:44:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.936 ************************************ 00:06:30.936 START TEST accel_negative_buffers 00:06:30.936 ************************************ 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:30.936 01:44:45 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:30.936 -x option must be non-negative. 
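Note: the negative-buffers case deliberately passes -x -1; the option list documents a minimum of 2 source buffers for the xor workload, so the negative value is rejected during argument parsing (the spdk_app_parse_args error logged next) and the NOT wrapper again counts the failure as a pass:

    build/examples/accel_perf -t 1 -w xor -y -x -1    # rejected: -x option must be non-negative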
00:06:30.936 [2024-07-24 01:44:45.823451] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:30.936 accel_perf options: 00:06:30.936 [-h help message] 00:06:30.936 [-q queue depth per core] 00:06:30.936 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:30.936 [-T number of threads per core 00:06:30.936 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:30.936 [-t time in seconds] 00:06:30.936 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:30.936 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:30.936 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:30.936 [-l for compress/decompress workloads, name of uncompressed input file 00:06:30.936 [-S for crc32c workload, use this seed value (default 0) 00:06:30.936 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:30.936 [-f for fill workload, use this BYTE value (default 255) 00:06:30.936 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:30.936 [-y verify result if this switch is on] 00:06:30.936 [-a tasks to allocate per core (default: same value as -q)] 00:06:30.936 Can be used to spread operations across a wider range of memory. 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.936 00:06:30.936 real 0m0.023s 00:06:30.936 user 0m0.015s 00:06:30.936 sys 0m0.008s 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.936 01:44:45 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:30.936 ************************************ 00:06:30.936 END TEST accel_negative_buffers 00:06:30.936 ************************************ 00:06:31.194 Error: writing output failed: Broken pipe 00:06:31.194 01:44:45 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:31.194 01:44:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.194 01:44:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.194 01:44:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.194 ************************************ 00:06:31.194 START TEST accel_crc32c 00:06:31.194 ************************************ 00:06:31.194 01:44:45 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:31.194 01:44:45 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:31.194 [2024-07-24 01:44:45.885793] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:31.194 [2024-07-24 01:44:45.885860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303418 ] 00:06:31.194 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.194 [2024-07-24 01:44:45.947387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.194 [2024-07-24 01:44:46.040749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 
01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.452 01:44:46 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.386 01:44:47 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:32.386 01:44:47 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.386 00:06:32.386 real 0m1.405s 00:06:32.386 user 0m1.261s 00:06:32.386 sys 0m0.147s 00:06:32.386 01:44:47 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.386 01:44:47 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:32.386 ************************************ 00:06:32.386 END TEST accel_crc32c 00:06:32.386 ************************************ 00:06:32.645 01:44:47 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:32.645 01:44:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:32.645 01:44:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.645 01:44:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.645 ************************************ 00:06:32.645 START TEST accel_crc32c_C2 00:06:32.645 ************************************ 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:32.645 01:44:47 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.645 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:32.645 [2024-07-24 01:44:47.341744] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:32.645 [2024-07-24 01:44:47.341801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303569 ] 00:06:32.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.645 [2024-07-24 01:44:47.403437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.645 [2024-07-24 01:44:47.494558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:32.903 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:32.904 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.904 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:32.904 01:44:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.837 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.838 00:06:33.838 real 0m1.395s 00:06:33.838 user 0m1.254s 00:06:33.838 sys 0m0.143s 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.838 01:44:48 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.838 ************************************ 00:06:33.838 END TEST accel_crc32c_C2 00:06:33.838 ************************************ 00:06:34.096 01:44:48 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:34.096 01:44:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:34.096 01:44:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.096 01:44:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.096 ************************************ 00:06:34.096 START TEST accel_copy 00:06:34.096 ************************************ 00:06:34.096 01:44:48 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:34.096 01:44:48 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:34.096 [2024-07-24 01:44:48.780968] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:34.096 [2024-07-24 01:44:48.781034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303845 ] 00:06:34.096 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.096 [2024-07-24 01:44:48.841591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.096 [2024-07-24 01:44:48.931998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.354 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:34.355 01:44:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 01:44:50 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:35.314 01:44:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.314 00:06:35.314 real 0m1.394s 00:06:35.314 user 0m1.253s 00:06:35.314 sys 0m0.142s 00:06:35.314 01:44:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.314 01:44:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:35.314 ************************************ 00:06:35.314 END TEST accel_copy 00:06:35.314 ************************************ 00:06:35.314 01:44:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.314 01:44:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:35.314 01:44:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.314 01:44:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.573 ************************************ 00:06:35.573 START TEST accel_fill 00:06:35.573 ************************************ 00:06:35.573 01:44:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@34 -- 
# [[ 0 -gt 0 ]] 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:35.573 [2024-07-24 01:44:50.218814] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:35.573 [2024-07-24 01:44:50.218880] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304005 ] 00:06:35.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.573 [2024-07-24 01:44:50.279891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.573 [2024-07-24 01:44:50.373128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:35.573 01:44:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.945 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:36.946 01:44:51 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.946 00:06:36.946 real 0m1.409s 00:06:36.946 user 0m1.271s 00:06:36.946 sys 0m0.139s 00:06:36.946 01:44:51 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.946 01:44:51 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:36.946 ************************************ 00:06:36.946 END TEST accel_fill 00:06:36.946 ************************************ 00:06:36.946 01:44:51 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:36.946 01:44:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:36.946 01:44:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.946 01:44:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.946 ************************************ 00:06:36.946 START TEST accel_copy_crc32c 00:06:36.946 ************************************ 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.946 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:36.946 01:44:51 
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:36.946 [2024-07-24 01:44:51.673921] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:36.946 [2024-07-24 01:44:51.673986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304159 ] 00:06:36.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.946 [2024-07-24 01:44:51.735506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.946 [2024-07-24 01:44:51.828802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.204 01:44:51 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.583 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.583 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:06:38.583 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.583 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.583 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.583 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.584 00:06:38.584 real 0m1.407s 00:06:38.584 user 0m1.265s 00:06:38.584 sys 0m0.146s 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.584 01:44:53 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:38.584 ************************************ 00:06:38.584 END TEST accel_copy_crc32c 00:06:38.584 ************************************ 00:06:38.584 01:44:53 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:38.584 01:44:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:38.584 01:44:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.584 01:44:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.585 ************************************ 00:06:38.585 START TEST accel_copy_crc32c_C2 00:06:38.585 ************************************ 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c 
-y -C 2 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.585 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:38.586 [2024-07-24 01:44:53.126174] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:38.586 [2024-07-24 01:44:53.126241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304350 ] 00:06:38.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.586 [2024-07-24 01:44:53.192446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.586 [2024-07-24 01:44:53.285829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.586 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.587 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.588 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.588 01:44:53 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:38.589 01:44:53 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.962 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.963 00:06:39.963 real 0m1.407s 00:06:39.963 user 0m1.266s 00:06:39.963 sys 0m0.143s 00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:06:39.963 01:44:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:39.963 ************************************ 00:06:39.963 END TEST accel_copy_crc32c_C2 00:06:39.963 ************************************ 00:06:39.963 01:44:54 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:39.963 01:44:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.963 01:44:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.963 01:44:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.963 ************************************ 00:06:39.963 START TEST accel_dualcast 00:06:39.963 ************************************ 00:06:39.963 01:44:54 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:39.963 [2024-07-24 01:44:54.576793] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:39.963 [2024-07-24 01:44:54.576859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304592 ] 00:06:39.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.963 [2024-07-24 01:44:54.634107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.963 [2024-07-24 01:44:54.730265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:39.963 01:44:54 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:55 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:41.336 01:44:55 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.336 00:06:41.336 real 0m1.402s 00:06:41.336 user 0m1.260s 00:06:41.336 sys 0m0.144s 00:06:41.336 01:44:55 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.336 01:44:55 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:41.336 ************************************ 00:06:41.336 END TEST accel_dualcast 00:06:41.336 ************************************ 00:06:41.336 01:44:55 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:41.336 01:44:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:41.336 01:44:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.336 01:44:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.336 ************************************ 00:06:41.336 START TEST accel_compare 00:06:41.336 ************************************ 00:06:41.336 01:44:56 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:41.336 01:44:56 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:41.336 [2024-07-24 01:44:56.024126] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:41.336 [2024-07-24 01:44:56.024191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304751 ] 00:06:41.336 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.336 [2024-07-24 01:44:56.085705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.336 [2024-07-24 01:44:56.178470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.594 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.595 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:41.595 01:44:56 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:41.595 01:44:56 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:41.595 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:41.595 01:44:56 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 
01:44:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:42.528 01:44:57 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.528 00:06:42.528 real 0m1.409s 00:06:42.528 user 0m1.259s 00:06:42.528 sys 0m0.151s 00:06:42.528 01:44:57 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.528 01:44:57 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:42.528 ************************************ 00:06:42.528 END TEST accel_compare 00:06:42.528 ************************************ 00:06:42.787 01:44:57 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:42.787 01:44:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.787 01:44:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.787 01:44:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.787 ************************************ 00:06:42.787 START TEST accel_xor 00:06:42.787 ************************************ 00:06:42.787 01:44:57 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:42.787 01:44:57 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:42.787 [2024-07-24 01:44:57.477281] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:42.787 [2024-07-24 01:44:57.477371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304904 ] 00:06:42.787 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.787 [2024-07-24 01:44:57.539625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.787 [2024-07-24 01:44:57.631795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:43.045 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.045 01:44:57 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.046 01:44:57 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:43.980 01:44:58 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.980 00:06:43.980 real 0m1.393s 00:06:43.980 user 0m1.251s 00:06:43.980 sys 0m0.143s 00:06:43.980 01:44:58 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.980 01:44:58 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:43.980 ************************************ 00:06:43.980 END TEST accel_xor 00:06:43.980 ************************************ 00:06:43.980 01:44:58 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:44.238 01:44:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:44.238 01:44:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.238 01:44:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.238 ************************************ 00:06:44.238 START TEST accel_xor 00:06:44.238 ************************************ 00:06:44.238 01:44:58 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:44.238 01:44:58 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:44.238 [2024-07-24 01:44:58.917123] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:44.238 [2024-07-24 01:44:58.917189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305176 ] 00:06:44.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.238 [2024-07-24 01:44:58.978411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.238 [2024-07-24 01:44:59.070380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:44.238 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.239 01:44:59 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.496 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:44.497 01:44:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:45.430 01:45:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.430 00:06:45.430 real 0m1.390s 00:06:45.430 user 0m1.249s 00:06:45.430 sys 0m0.143s 00:06:45.430 01:45:00 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.430 01:45:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:45.430 ************************************ 00:06:45.430 END TEST accel_xor 00:06:45.430 ************************************ 00:06:45.430 01:45:00 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:45.430 01:45:00 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:45.430 01:45:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.430 01:45:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.688 ************************************ 00:06:45.688 START TEST accel_dif_verify 00:06:45.689 ************************************ 00:06:45.689 01:45:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:45.689 [2024-07-24 01:45:00.349275] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:45.689 [2024-07-24 01:45:00.349371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305370 ] 00:06:45.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.689 [2024-07-24 01:45:00.408989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.689 [2024-07-24 01:45:00.500750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.689 01:45:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:47.062 01:45:01 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.062 00:06:47.062 real 0m1.396s 00:06:47.062 user 0m1.256s 00:06:47.062 sys 0m0.142s 00:06:47.062 01:45:01 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.062 01:45:01 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 ************************************ 00:06:47.062 END TEST accel_dif_verify 00:06:47.062 ************************************ 00:06:47.062 01:45:01 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:47.062 01:45:01 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:47.062 01:45:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.062 01:45:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 ************************************ 00:06:47.062 START TEST accel_dif_generate 00:06:47.062 ************************************ 00:06:47.062 01:45:01 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 
-w dif_generate 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:47.062 01:45:01 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:47.062 [2024-07-24 01:45:01.790997] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:47.062 [2024-07-24 01:45:01.791064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305595 ] 00:06:47.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.062 [2024-07-24 01:45:01.855281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.062 [2024-07-24 01:45:01.950473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.320 01:45:02 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.320 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:47.321 01:45:02 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:47.321 01:45:02 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:47.321 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:47.321 01:45:02 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:48.697 01:45:03 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.697 00:06:48.697 real 0m1.413s 
00:06:48.697 user 0m1.260s 00:06:48.697 sys 0m0.155s 00:06:48.697 01:45:03 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.697 01:45:03 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:48.697 ************************************ 00:06:48.697 END TEST accel_dif_generate 00:06:48.697 ************************************ 00:06:48.697 01:45:03 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:48.697 01:45:03 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:48.697 01:45:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.697 01:45:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.697 ************************************ 00:06:48.697 START TEST accel_dif_generate_copy 00:06:48.697 ************************************ 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:48.697 [2024-07-24 01:45:03.247431] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:48.697 [2024-07-24 01:45:03.247489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305762 ] 00:06:48.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.697 [2024-07-24 01:45:03.309941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.697 [2024-07-24 01:45:03.402654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.697 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.698 01:45:03 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.069 00:06:50.069 real 0m1.404s 00:06:50.069 user 0m1.260s 00:06:50.069 sys 0m0.144s 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.069 01:45:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.069 ************************************ 00:06:50.069 END TEST accel_dif_generate_copy 00:06:50.069 ************************************ 00:06:50.069 01:45:04 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:50.069 01:45:04 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.069 01:45:04 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:50.069 01:45:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.069 01:45:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.069 ************************************ 00:06:50.069 START TEST accel_comp 00:06:50.069 ************************************ 00:06:50.069 01:45:04 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:50.069 01:45:04 accel.accel_comp 
-- accel/accel.sh@17 -- # local accel_module 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:50.069 [2024-07-24 01:45:04.695876] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:50.069 [2024-07-24 01:45:04.695938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306028 ] 00:06:50.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.069 [2024-07-24 01:45:04.757633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.069 [2024-07-24 01:45:04.850769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.069 01:45:04 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.069 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- 
# val= 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.070 01:45:04 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:51.442 01:45:06 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.442 00:06:51.442 real 0m1.406s 00:06:51.442 user 0m1.265s 00:06:51.442 sys 0m0.145s 00:06:51.442 01:45:06 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.442 01:45:06 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:51.442 ************************************ 00:06:51.442 END TEST accel_comp 00:06:51.442 ************************************ 00:06:51.442 01:45:06 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.442 01:45:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.442 01:45:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.442 01:45:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.442 ************************************ 00:06:51.442 START TEST accel_decomp 00:06:51.442 
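The accel_comp and accel_decomp sections drive the same binary against the bundled test/accel/bib file: the compress run passes it with -l, and the decompress run adds -y. A short sketch of that pair as seen in the trace (reading -y as a verify option is an inference from how it is used here, not something the log states):

    # Sketch: compression, then verified decompression of the bundled input file,
    # as in "accel_perf -t 1 -w compress -l .../test/accel/bib" and
    # "accel_perf -t 1 -w decompress -l .../test/accel/bib -y" above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y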
************************************ 00:06:51.442 01:45:06 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:51.442 01:45:06 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:51.443 [2024-07-24 01:45:06.145652] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:51.443 [2024-07-24 01:45:06.145718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306193 ] 00:06:51.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.443 [2024-07-24 01:45:06.208817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.443 [2024-07-24 01:45:06.299815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 
01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:51.701 01:45:06 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.635 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:52.893 01:45:07 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.893 00:06:52.893 real 0m1.401s 00:06:52.893 user 0m1.260s 00:06:52.893 sys 0m0.144s 00:06:52.893 01:45:07 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.893 01:45:07 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:52.893 ************************************ 00:06:52.893 END TEST 
accel_decomp 00:06:52.893 ************************************ 00:06:52.893 01:45:07 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.893 01:45:07 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:52.893 01:45:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.893 01:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.893 ************************************ 00:06:52.893 START TEST accel_decomp_full 00:06:52.893 ************************************ 00:06:52.893 01:45:07 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:52.893 01:45:07 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:52.893 [2024-07-24 01:45:07.593266] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:06:52.893 [2024-07-24 01:45:07.593342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306578 ] 00:06:52.893 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.893 [2024-07-24 01:45:07.655362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.894 [2024-07-24 01:45:07.747085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:53.152 01:45:07 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:08 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:54.143 01:45:08 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.143 00:06:54.143 real 0m1.417s 00:06:54.143 user 0m1.275s 00:06:54.143 sys 0m0.145s 00:06:54.143 01:45:08 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.143 01:45:08 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:54.143 ************************************ 00:06:54.143 END TEST accel_decomp_full 00:06:54.143 ************************************ 00:06:54.143 01:45:09 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.143 01:45:09 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:54.143 01:45:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.143 01:45:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.143 ************************************ 00:06:54.143 START TEST accel_decomp_mcore 00:06:54.143 ************************************ 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:54.143 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:54.402 [2024-07-24 01:45:09.051217] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:54.402 [2024-07-24 01:45:09.051288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307074 ] 00:06:54.402 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.402 [2024-07-24 01:45:09.114066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.402 [2024-07-24 01:45:09.207231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.402 [2024-07-24 01:45:09.207287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.402 [2024-07-24 01:45:09.207406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.402 [2024-07-24 01:45:09.207409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- 
# val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var 
val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:54.402 01:45:09 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.776 00:06:55.776 real 0m1.410s 00:06:55.776 user 0m4.712s 00:06:55.776 sys 0m0.141s 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.776 01:45:10 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:55.776 ************************************ 00:06:55.776 END TEST accel_decomp_mcore 00:06:55.776 ************************************ 00:06:55.776 01:45:10 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.776 01:45:10 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:55.776 01:45:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.777 01:45:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.777 ************************************ 00:06:55.777 START TEST accel_decomp_full_mcore 00:06:55.777 ************************************ 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.777 01:45:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:55.777 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:55.777 [2024-07-24 01:45:10.512437] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:55.777 [2024-07-24 01:45:10.512514] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307284 ] 00:06:55.777 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.777 [2024-07-24 01:45:10.572840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.777 [2024-07-24 01:45:10.669423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.777 [2024-07-24 01:45:10.669496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.777 [2024-07-24 01:45:10.669594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.777 [2024-07-24 01:45:10.669600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.035 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:56.036 01:45:10 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:56.036 01:45:10 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.409 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:11 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.410 00:06:57.410 real 0m1.415s 00:06:57.410 user 0m4.733s 00:06:57.410 sys 0m0.149s 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.410 01:45:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:57.410 ************************************ 00:06:57.410 END TEST accel_decomp_full_mcore 00:06:57.410 ************************************ 00:06:57.410 01:45:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:57.410 01:45:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:57.410 01:45:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.410 01:45:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.410 ************************************ 00:06:57.410 START TEST accel_decomp_mthread 00:06:57.410 ************************************ 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
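A quick cross-check on the two multi-core results above: both runs report roughly 4.7s of user CPU time (0m4.712s and 0m4.733s) against roughly 1.4s of wall-clock time, i.e. about 4.7 / 1.4 ≈ 3.3 cores busy on average, which is consistent with the decompress work being spread across the four reactors started for the 0xf core mask.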
00:06:57.410 01:45:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:57.410 [2024-07-24 01:45:11.970955] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:06:57.410 [2024-07-24 01:45:11.971024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307453 ] 00:06:57.410 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.410 [2024-07-24 01:45:12.032677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.410 [2024-07-24 01:45:12.126330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 
01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.410 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.411 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.411 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:57.411 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:57.411 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:57.411 01:45:12 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:57.411 01:45:12 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.783 00:06:58.783 real 0m1.414s 00:06:58.783 user 0m1.268s 00:06:58.783 sys 0m0.149s 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.783 01:45:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:58.783 ************************************ 00:06:58.783 END TEST accel_decomp_mthread 00:06:58.783 ************************************ 00:06:58.783 01:45:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.783 01:45:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 
']' 00:06:58.783 01:45:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.783 01:45:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.783 ************************************ 00:06:58.783 START TEST accel_decomp_full_mthread 00:06:58.783 ************************************ 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:58.783 [2024-07-24 01:45:13.435213] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
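For reference, the decompress run being set up here drives the standalone accel_perf example; a minimal sketch of an equivalent manual invocation follows. The harness feeds a generated JSON accel config over /dev/fd/62, which is omitted here on the assumption that the default software path needs no extra config; the comments describe how the harness uses the flags, not a full option reference.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # decompress the pre-compressed test input (test/accel/bib) for 1 second,
    # verifying the output (-y) and using 2 worker threads (-T 2), as in the run above
    $SPDK/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -o 0 -T 2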
00:06:58.783 [2024-07-24 01:45:13.435281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307607 ] 00:06:58.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.783 [2024-07-24 01:45:13.496650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.783 [2024-07-24 01:45:13.588342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.783 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:58.784 01:45:13 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:58.784 01:45:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.155 00:07:00.155 real 0m1.435s 00:07:00.155 user 0m1.300s 00:07:00.155 sys 0m0.139s 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.155 01:45:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:00.155 ************************************ 00:07:00.155 END 
TEST accel_decomp_full_mthread 00:07:00.155 ************************************ 00:07:00.155 01:45:14 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:00.155 01:45:14 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:00.155 01:45:14 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:00.155 01:45:14 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.155 01:45:14 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:00.155 01:45:14 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.155 01:45:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.155 01:45:14 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.155 01:45:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.155 01:45:14 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.155 01:45:14 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.155 01:45:14 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:00.155 01:45:14 accel -- accel/accel.sh@41 -- # jq -r . 00:07:00.155 ************************************ 00:07:00.155 START TEST accel_dif_functional_tests 00:07:00.155 ************************************ 00:07:00.155 01:45:14 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:00.155 [2024-07-24 01:45:14.947836] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:07:00.155 [2024-07-24 01:45:14.947906] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307874 ] 00:07:00.155 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.155 [2024-07-24 01:45:15.010081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.413 [2024-07-24 01:45:15.106100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.413 [2024-07-24 01:45:15.106151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.413 [2024-07-24 01:45:15.106168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.413 00:07:00.413 00:07:00.413 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.413 http://cunit.sourceforge.net/ 00:07:00.413 00:07:00.413 00:07:00.413 Suite: accel_dif 00:07:00.413 Test: verify: DIF generated, GUARD check ...passed 00:07:00.413 Test: verify: DIF generated, APPTAG check ...passed 00:07:00.413 Test: verify: DIF generated, REFTAG check ...passed 00:07:00.413 Test: verify: DIF not generated, GUARD check ...[2024-07-24 01:45:15.199442] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:00.413 passed 00:07:00.413 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 01:45:15.199513] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:00.413 passed 00:07:00.413 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 01:45:15.199545] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:00.413 passed 00:07:00.413 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:00.413 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 01:45:15.199607] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:07:00.413 passed 00:07:00.413 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:00.413 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:00.413 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:00.413 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 01:45:15.199748] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:00.413 passed 00:07:00.413 Test: verify copy: DIF generated, GUARD check ...passed 00:07:00.413 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:00.413 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:00.413 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 01:45:15.199899] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:00.413 passed 00:07:00.413 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 01:45:15.199932] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:00.413 passed 00:07:00.413 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 01:45:15.199963] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:00.413 passed 00:07:00.413 Test: generate copy: DIF generated, GUARD check ...passed 00:07:00.413 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:00.413 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:00.413 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:00.413 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:00.413 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:00.413 Test: generate copy: iovecs-len validate ...[2024-07-24 01:45:15.200173] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:00.413 passed 00:07:00.413 Test: generate copy: buffer alignment validate ...passed 00:07:00.413 00:07:00.413 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.413 suites 1 1 n/a 0 0 00:07:00.413 tests 26 26 26 0 0 00:07:00.413 asserts 115 115 115 0 n/a 00:07:00.413 00:07:00.413 Elapsed time = 0.002 seconds 00:07:00.671 00:07:00.671 real 0m0.502s 00:07:00.671 user 0m0.776s 00:07:00.671 sys 0m0.178s 00:07:00.671 01:45:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.671 01:45:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:00.671 ************************************ 00:07:00.671 END TEST accel_dif_functional_tests 00:07:00.671 ************************************ 00:07:00.671 00:07:00.671 real 0m31.678s 00:07:00.671 user 0m35.067s 00:07:00.671 sys 0m4.559s 00:07:00.671 01:45:15 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.671 01:45:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.671 ************************************ 00:07:00.671 END TEST accel 00:07:00.671 ************************************ 00:07:00.671 01:45:15 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:00.671 01:45:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.671 01:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.671 01:45:15 -- common/autotest_common.sh@10 -- # set +x 00:07:00.671 ************************************ 00:07:00.671 START TEST accel_rpc 00:07:00.671 ************************************ 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:00.671 * Looking for test storage... 00:07:00.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:00.671 01:45:15 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:00.671 01:45:15 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1307953 00:07:00.671 01:45:15 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:00.671 01:45:15 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1307953 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1307953 ']' 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.671 01:45:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 [2024-07-24 01:45:15.572463] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:07:00.930 [2024-07-24 01:45:15.572540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307953 ] 00:07:00.930 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.930 [2024-07-24 01:45:15.628046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.930 [2024-07-24 01:45:15.712378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.930 01:45:15 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.930 01:45:15 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:00.930 01:45:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:00.930 01:45:15 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:00.930 01:45:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:00.930 01:45:15 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:00.930 01:45:15 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:00.930 01:45:15 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.930 01:45:15 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.930 01:45:15 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 ************************************ 00:07:00.930 START TEST accel_assign_opcode 00:07:00.930 ************************************ 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 [2024-07-24 01:45:15.805031] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:00.930 [2024-07-24 01:45:15.813042] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.930 01:45:15 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
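The accel_rpc flow being exercised here boils down to a short RPC sequence; a sketch of the same steps driven by hand with rpc.py (paths taken from this workspace, target started in the background as above) would be:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --wait-for-rpc &                    # start the target with subsystem init deferred
    # (the harness waits for the RPC socket before issuing these calls)
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m incorrect   # first assign copy to a bogus module name
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software    # then override the assignment with software
    $SPDK/scripts/rpc.py framework_start_init                    # finish initialization
    $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy # the test expects this to report software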
00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:01.188 01:45:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.446 software 00:07:01.446 00:07:01.446 real 0m0.296s 00:07:01.446 user 0m0.040s 00:07:01.446 sys 0m0.006s 00:07:01.446 01:45:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.446 01:45:16 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:01.446 ************************************ 00:07:01.446 END TEST accel_assign_opcode 00:07:01.446 ************************************ 00:07:01.446 01:45:16 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1307953 00:07:01.446 01:45:16 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1307953 ']' 00:07:01.446 01:45:16 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1307953 00:07:01.446 01:45:16 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1307953 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1307953' 00:07:01.447 killing process with pid 1307953 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@967 -- # kill 1307953 00:07:01.447 01:45:16 accel_rpc -- common/autotest_common.sh@972 -- # wait 1307953 00:07:01.705 00:07:01.705 real 0m1.070s 00:07:01.705 user 0m1.022s 00:07:01.705 sys 0m0.397s 00:07:01.705 01:45:16 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.705 01:45:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.705 ************************************ 00:07:01.705 END TEST accel_rpc 00:07:01.705 ************************************ 00:07:01.705 01:45:16 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.705 01:45:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:01.705 01:45:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.705 01:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:01.705 ************************************ 00:07:01.705 START TEST app_cmdline 00:07:01.705 ************************************ 00:07:01.705 01:45:16 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.964 * Looking for test storage... 
00:07:01.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:01.964 01:45:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.964 01:45:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1308157 00:07:01.964 01:45:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.964 01:45:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1308157 00:07:01.964 01:45:16 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1308157 ']' 00:07:01.964 01:45:16 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.964 01:45:16 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.964 01:45:16 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.964 01:45:16 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.964 01:45:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.964 [2024-07-24 01:45:16.694751] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:07:01.964 [2024-07-24 01:45:16.694846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308157 ] 00:07:01.964 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.964 [2024-07-24 01:45:16.758887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.964 [2024-07-24 01:45:16.852110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.222 01:45:17 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:02.222 01:45:17 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:02.222 01:45:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.479 { 00:07:02.479 "version": "SPDK v24.09-pre git sha1 78cbcfdde", 00:07:02.479 "fields": { 00:07:02.479 "major": 24, 00:07:02.479 "minor": 9, 00:07:02.479 "patch": 0, 00:07:02.479 "suffix": "-pre", 00:07:02.479 "commit": "78cbcfdde" 00:07:02.479 } 00:07:02.479 } 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.479 01:45:17 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.479 01:45:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.479 01:45:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.479 01:45:17 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.737 01:45:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.737 01:45:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.737 01:45:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.737 01:45:17 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.737 request: 00:07:02.737 { 00:07:02.737 "method": "env_dpdk_get_mem_stats", 00:07:02.737 "req_id": 1 00:07:02.737 } 00:07:02.737 Got JSON-RPC error response 00:07:02.737 response: 00:07:02.737 { 00:07:02.737 "code": -32601, 00:07:02.737 "message": "Method not found" 00:07:02.737 } 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:02.996 01:45:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1308157 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1308157 ']' 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1308157 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1308157 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1308157' 00:07:02.996 killing process with pid 1308157 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@967 -- # kill 1308157 00:07:02.996 01:45:17 app_cmdline -- common/autotest_common.sh@972 -- # wait 1308157 00:07:03.254 00:07:03.254 real 0m1.504s 00:07:03.254 user 0m1.813s 00:07:03.254 sys 0m0.451s 00:07:03.254 01:45:18 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
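The cmdline test above hinges on the target's RPC allow-list; a minimal sketch of that behaviour, using the same flags the test passes, is:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    $SPDK/scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow-list: rejected with -32601 "Method not found"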
00:07:03.254 01:45:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 ************************************ 00:07:03.254 END TEST app_cmdline 00:07:03.254 ************************************ 00:07:03.254 01:45:18 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:03.254 01:45:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.254 01:45:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.254 01:45:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.254 ************************************ 00:07:03.254 START TEST version 00:07:03.254 ************************************ 00:07:03.254 01:45:18 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:03.511 * Looking for test storage... 00:07:03.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.511 01:45:18 version -- app/version.sh@17 -- # get_header_version major 00:07:03.511 01:45:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # cut -f2 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.511 01:45:18 version -- app/version.sh@17 -- # major=24 00:07:03.511 01:45:18 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.511 01:45:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # cut -f2 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.511 01:45:18 version -- app/version.sh@18 -- # minor=9 00:07:03.511 01:45:18 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.511 01:45:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # cut -f2 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.511 01:45:18 version -- app/version.sh@19 -- # patch=0 00:07:03.511 01:45:18 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.511 01:45:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # cut -f2 00:07:03.511 01:45:18 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.511 01:45:18 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.511 01:45:18 version -- app/version.sh@22 -- # version=24.9 00:07:03.511 01:45:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.511 01:45:18 version -- app/version.sh@28 -- # version=24.9rc0 00:07:03.512 01:45:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.512 01:45:18 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.512 01:45:18 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:07:03.512 01:45:18 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:03.512 00:07:03.512 real 0m0.105s 00:07:03.512 user 0m0.053s 00:07:03.512 sys 0m0.074s 00:07:03.512 01:45:18 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.512 01:45:18 version -- common/autotest_common.sh@10 -- # set +x 00:07:03.512 ************************************ 00:07:03.512 END TEST version 00:07:03.512 ************************************ 00:07:03.512 01:45:18 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@198 -- # uname -s 00:07:03.512 01:45:18 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:03.512 01:45:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:03.512 01:45:18 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:03.512 01:45:18 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:03.512 01:45:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:03.512 01:45:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.512 01:45:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:03.512 01:45:18 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:03.512 01:45:18 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.512 01:45:18 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.512 01:45:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.512 01:45:18 -- common/autotest_common.sh@10 -- # set +x 00:07:03.512 ************************************ 00:07:03.512 START TEST nvmf_tcp 00:07:03.512 ************************************ 00:07:03.512 01:45:18 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:03.512 * Looking for test storage... 00:07:03.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:03.512 01:45:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:03.512 01:45:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:03.512 01:45:18 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:03.512 01:45:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.512 01:45:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.512 01:45:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.512 ************************************ 00:07:03.512 START TEST nvmf_target_core 00:07:03.512 ************************************ 00:07:03.512 01:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:03.770 * Looking for test storage... 
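For reference, the get_header_version helper traced in the version test above is just a grep/cut/tr pipeline over include/spdk/version.h, compared against the python package's idea of the version. A standalone sketch (SPDK_DIR is an assumption; the pipeline stages are copied from the traced commands):

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
get_header_version() {
    # e.g. get_header_version MAJOR -> 24, SUFFIX -> -pre
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$SPDK_DIR/include/spdk/version.h" \
        | cut -f2 | tr -d '"'
}
version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.9 here; patch is 0 and is omitted
[[ "$(get_header_version SUFFIX)" == -pre ]] && version="${version}rc0"
PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)'  # expected to match: 24.9rc0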
00:07:03.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.771 ************************************ 00:07:03.771 START TEST nvmf_abort 00:07:03.771 ************************************ 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:03.771 * Looking for test storage... 00:07:03.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
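The common.sh block being re-sourced for nvmf_abort mainly pins down addressing and identity: listener ports, the default subsystem NQN, and a freshly generated host NQN. For orientation, those variables map onto an nvme-cli connect like the sketch below; this is illustrative only, since the abort test drives I/O through the SPDK abort example (and creates its own subsystem, cnode0) rather than the kernel initiator:

NVMF_PORT=4420
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NVME_HOSTNQN=$(nvme gen-hostnqn)          # random uuid-based host NQN, as generated in the trace above
NVMF_FIRST_TARGET_IP=10.0.0.2             # assigned to the target-side port later in this run

nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
     -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN"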
00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:03.771 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.772 01:45:18 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.772 01:45:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.672 01:45:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.672 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:05.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:05.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.673 01:45:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:05.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:05.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.673 01:45:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:05.673 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.931 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.931 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.931 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:05.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:07:05.931 00:07:05.931 --- 10.0.0.2 ping statistics --- 00:07:05.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.932 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:07:05.932 00:07:05.932 --- 10.0.0.1 ping statistics --- 00:07:05.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.932 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1310194 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1310194 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1310194 ']' 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.932 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:05.932 [2024-07-24 01:45:20.675164] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
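The dual-port topology configured just above is worth calling out: one E810 port stays in the default namespace as the initiator, the other is moved into a private namespace and addressed as the target, so the NVMe/TCP traffic really crosses the wire. Stripped of the nvmf_tcp_init indirection, the commands from the trace are (root required; interface names are the cvl_0_0/cvl_0_1 devices discovered earlier):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The nvmf_tgt launched next runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...), which is why its listener on 10.0.0.2:4420 is reachable from the initiator-side port.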
00:07:05.932 [2024-07-24 01:45:20.675247] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.932 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.932 [2024-07-24 01:45:20.743875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.190 [2024-07-24 01:45:20.839290] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.190 [2024-07-24 01:45:20.839354] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:06.190 [2024-07-24 01:45:20.839372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.190 [2024-07-24 01:45:20.839386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.190 [2024-07-24 01:45:20.839399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.190 [2024-07-24 01:45:20.839489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.190 [2024-07-24 01:45:20.839543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.190 [2024-07-24 01:45:20.839546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 [2024-07-24 01:45:20.990107] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 Malloc0 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.190 Delay0 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 [2024-07-24 01:45:21.058313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.190 01:45:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:06.448 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.448 [2024-07-24 01:45:21.153446] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:08.978 Initializing NVMe Controllers 00:07:08.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:08.978 controller IO queue size 128 less than required 00:07:08.978 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:08.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:08.978 Initialization complete. Launching workers. 
00:07:08.978 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34000 00:07:08.978 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34061, failed to submit 62 00:07:08.978 success 34004, unsuccess 57, failed 0 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:08.978 rmmod nvme_tcp 00:07:08.978 rmmod nvme_fabrics 00:07:08.978 rmmod nvme_keyring 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1310194 ']' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1310194 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1310194 ']' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1310194 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1310194 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1310194' 00:07:08.978 killing process with pid 1310194 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1310194 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1310194 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.978 01:45:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:10.879 00:07:10.879 real 0m7.250s 00:07:10.879 user 0m10.857s 00:07:10.879 sys 0m2.422s 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.879 ************************************ 00:07:10.879 END TEST nvmf_abort 00:07:10.879 ************************************ 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.879 ************************************ 00:07:10.879 START TEST nvmf_ns_hotplug_stress 00:07:10.879 ************************************ 00:07:10.879 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:11.137 * Looking for test storage... 
00:07:11.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.137 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.138 01:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.068 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
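Both the abort test above and this ns_hotplug_stress run repeat the same NIC discovery; behind the pci_bus_cache arrays it amounts to matching Intel E810 ports by PCI vendor/device ID and listing the net devices bound to them. A hand-rolled equivalent over sysfs (the 0x8086/0x159b IDs are the ones matched in this trace; the real helper also covers the other E810/X722/mlx5 IDs and the RDMA-specific branches):

for pci in /sys/bus/pci/devices/*; do
    # E810 ports show up as vendor 0x8086, device 0x159b
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net device ${net##*/} under PCI address ${pci##*/}"
    done
done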
00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:13.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.069 01:45:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:13.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:13.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:13.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:13.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:13.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:07:13.069 00:07:13.069 --- 10.0.0.2 ping statistics --- 00:07:13.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.069 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:13.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:13.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:13.069 00:07:13.069 --- 10.0.0.1 ping statistics --- 00:07:13.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:13.069 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:13.069 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:13.070 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1312421 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1312421 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1312421 ']' 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
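The nvmf_tcp_init block traced above wires the two ports of one NIC into a loopback-style topology: cvl_0_0 moves into a fresh network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), with an iptables rule opening TCP/4420 and a ping in each direction as a sanity check. A condensed sketch of the same sequence, treating the interface names as placeholders for whatever ports a given machine has (run as root):

  TARGET_IF=cvl_0_0            # port that moves into the namespace
  INITIATOR_IF=cvl_0_1         # port that stays in the default namespace
  NETNS=cvl_0_0_ns_spdk

  ip -4 addr flush dev "$TARGET_IF"
  ip -4 addr flush dev "$INITIATOR_IF"
  ip netns add "$NETNS"
  ip link set "$TARGET_IF" netns "$NETNS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NETNS" ip link set "$TARGET_IF" up
  ip netns exec "$NETNS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                           # initiator side -> namespaced target
  ip netns exec "$NETNS" ping -c 1 10.0.0.1    # target namespace -> initiator side

Keeping the target port inside the namespace is what lets NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk) wrap every target-side command, including the nvmf_tgt launch that follows.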
00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.328 01:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.328 [2024-07-24 01:45:28.014808] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:07:13.328 [2024-07-24 01:45:28.014905] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.328 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.328 [2024-07-24 01:45:28.079332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.328 [2024-07-24 01:45:28.173979] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.328 [2024-07-24 01:45:28.174036] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.328 [2024-07-24 01:45:28.174065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.328 [2024-07-24 01:45:28.174077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.328 [2024-07-24 01:45:28.174087] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.328 [2024-07-24 01:45:28.174188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.328 [2024-07-24 01:45:28.174223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.328 [2024-07-24 01:45:28.174221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:13.586 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:13.844 [2024-07-24 01:45:28.553270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.844 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:14.101 01:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.359 
[2024-07-24 01:45:29.036906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.359 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:14.616 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:14.873 Malloc0 00:07:14.873 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:15.131 Delay0 00:07:15.131 01:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.389 01:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:15.647 NULL1 00:07:15.647 01:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:15.905 01:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1312743 00:07:15.905 01:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:15.905 01:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:15.905 01:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.905 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.277 Read completed with error (sct=0, sc=11) 00:07:17.277 01:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.277 01:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:17.277 01:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:17.535 true 00:07:17.535 01:45:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:17.535 01:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.468 01:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.468 01:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:18.468 01:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:18.725 true 00:07:18.725 01:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:18.725 01:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.981 01:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.239 01:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:19.239 01:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:19.497 true 00:07:19.497 01:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:19.497 01:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.430 01:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.430 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:20.688 01:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:20.688 01:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:20.946 true 00:07:20.946 01:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:20.946 01:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.204 01:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.462 01:45:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:21.462 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:21.720 true 00:07:21.720 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:21.720 01:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.654 01:45:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.654 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:22.654 01:45:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:22.654 01:45:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:22.912 true 00:07:22.912 01:45:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:22.912 01:45:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.170 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.428 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:23.428 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:23.685 true 00:07:23.685 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:23.685 01:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.619 01:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.877 01:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:24.877 01:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:25.135 true 00:07:25.135 01:45:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:25.135 01:45:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.392 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.649 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:25.650 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:25.907 true 00:07:25.907 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:25.907 01:45:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.839 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.097 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:27.097 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:27.097 true 00:07:27.354 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:27.354 01:45:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.354 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.612 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:27.612 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:27.869 true 00:07:27.869 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:27.869 01:45:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.801 01:45:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.083 01:45:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:29.083 01:45:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:29.351 true 00:07:29.351 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:29.351 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.609 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.866 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:29.866 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:30.123 true 00:07:30.123 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:30.123 01:45:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.053 01:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.053 01:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:31.053 01:45:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:31.309 true 00:07:31.309 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:31.309 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.566 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.823 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:31.823 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:32.080 true 00:07:32.080 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:32.080 01:45:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.450 01:45:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.450 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:33.450 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:33.706 true 00:07:33.706 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:33.706 01:45:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.638 01:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.638 01:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:34.638 01:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:34.896 true 00:07:34.896 01:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:34.896 01:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.154 01:45:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.411 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:35.411 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:35.668 true 00:07:35.668 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:35.668 01:45:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.600 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.600 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:36.600 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:36.856 true 00:07:36.856 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:36.856 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.114 01:45:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.371 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:37.371 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:37.628 true 00:07:37.628 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:37.628 01:45:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.561 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.817 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:38.817 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:39.073 true 00:07:39.073 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:39.073 01:45:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.330 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.586 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:39.586 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:39.843 true 00:07:39.843 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:39.843 01:45:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.775 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.775 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.032 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:41.032 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:41.032 true 00:07:41.032 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:41.032 01:45:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.289 01:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.547 01:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:41.547 01:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:41.804 true 00:07:41.804 01:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:41.804 01:45:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.737 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.994 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:42.994 01:45:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:43.252 true 00:07:43.252 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:43.252 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.510 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.768 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:43.768 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:44.026 true 00:07:44.026 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:44.026 01:45:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.958 01:45:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.216 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:45.216 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:45.499 true 00:07:45.499 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:45.499 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.765 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.023 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:46.023 01:46:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:46.280 true 00:07:46.280 01:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:46.280 01:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.212 Initializing NVMe Controllers 00:07:47.212 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:47.212 Controller IO queue size 128, less than required. 00:07:47.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.212 Controller IO queue size 128, less than required. 00:07:47.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:47.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:47.212 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:47.212 Initialization complete. Launching workers. 
00:07:47.212 ======================================================== 00:07:47.212 Latency(us) 00:07:47.212 Device Information : IOPS MiB/s Average min max 00:07:47.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1082.44 0.53 71412.85 2081.60 1062401.08 00:07:47.212 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11989.48 5.85 10676.04 3693.10 463507.61 00:07:47.212 ======================================================== 00:07:47.212 Total : 13071.92 6.38 15705.43 2081.60 1062401.08 00:07:47.212 00:07:47.212 01:46:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.470 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:47.470 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:47.470 true 00:07:47.727 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1312743 00:07:47.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1312743) - No such process 00:07:47.727 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1312743 00:07:47.727 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.727 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:47.985 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:47.985 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:47.985 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:47.985 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:47.985 01:46:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:48.243 null0 00:07:48.243 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.243 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.243 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:48.500 null1 00:07:48.500 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.500 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.500 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:48.757 null2 00:07:48.757 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:48.757 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:48.757 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:49.014 null3 00:07:49.014 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.014 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.014 01:46:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:49.271 null4 00:07:49.271 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.271 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.271 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:49.529 null5 00:07:49.529 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.529 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.529 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:49.786 null6 00:07:49.786 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:49.786 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:49.786 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:50.044 null7 00:07:50.044 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:50.044 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:50.044 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:50.044 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.044 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
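Phase one of the test, traced earlier (null_size climbing 1001 through 1029 while spdk_nvme_perf produced the I/O summary above), amounts to churning namespace 1 of the subsystem for as long as the perf process stays alive, growing the NULL1 bdev on every pass. Reconstructed roughly from the traced script lines 40-55, with paths shortened to placeholders; the shipped ns_hotplug_stress.sh may differ in minor details:

  rpc=./scripts/rpc.py                           # placeholder for the full rpc.py path
  nqn=nqn.2016-06.io.spdk:cnode1
  null_size=1000

  # ~30 s of queued random reads against both namespaces, flags as in the trace
  ./build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  while kill -0 "$PERF_PID"; do                  # loop until perf exits
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1   # hot-remove namespace 1...
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0 # ...and plug it straight back in
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size" # grow the other namespace's bdev
  done
  wait "$PERF_PID"
  "$rpc" nvmf_subsystem_remove_ns "$nqn" 1
  "$rpc" nvmf_subsystem_remove_ns "$nqn" 2

Reads that land inside a removal window complete with sct=0, sc=11, which appears to be what the -Q 1000 flag condenses into the recurring "Message suppressed 999 times: Read completed with error" lines rather than letting them flood the log.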
00:07:50.044 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
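With the first workers kicked off above, the remaining trace repeats the same pattern through null7: eight small null bdevs, each handed to a backgrounded add_remove worker that attaches and detaches its own namespace ID ten times, followed by a wait on all of the PIDs. A sketch reconstructed from the traced script lines (@14-@18 and @58-@66); the real ns_hotplug_stress.sh may differ in detail:

  rpc=./scripts/rpc.py                 # placeholder for the full rpc.py path
  nqn=nqn.2016-06.io.spdk:cnode1
  nthreads=8
  pids=()

  add_remove() {                       # one worker: churn a single namespace ID
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
      done
  }

  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # 100 MB null bdev, 4 KiB blocks
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &          # nsid 1..8 against null0..null7
      pids+=($!)
  done
  wait "${pids[@]}"

Because each worker runs as its own background subshell, the eight namespace IDs are added and removed concurrently against the same subsystem, which is exactly the hotplug race this stage of the test is exercising.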
00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
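The interleaved @62-@64 entries above launch one add_remove worker per namespace in the background and collect its PID; the @66 wait entry a little further down then blocks on all of them. A sketch of that dispatch pattern, building on the add_remove sketch above (nthreads = 8 is inferred from the eight PIDs being waited on):

    # Reconstructed dispatch: nsid 1..8 paired with bdevs null0..null7, all workers run concurrently
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"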
00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1316921 1316922 1316924 1316926 1316928 1316930 1316932 1316934 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.045 01:46:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.302 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.558 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:50.559 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:50.815 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.071 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.072 01:46:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:51.328 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:51.585 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:51.842 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:51.842 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.842 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:51.842 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.098 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.098 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.098 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.098 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.355 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
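Each adjacent pair of @17 (add) and @18 (remove) entries in this stretch of the log is one hotplug cycle for one namespace, issued by one of the eight concurrent workers, which is why the output interleaves. A single cycle can be reproduced by hand with the same two RPC calls and argument order shown in the trace; the values below are copied from one iteration above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Attach bdev null2 to nqn.2016-06.io.spdk:cnode1 as namespace 3, then detach it again
    "$rpc" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3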
00:07:52.355 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.355 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.355 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:52.612 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:52.870 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.127 01:46:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.385 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:53.642 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.899 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:53.900 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.157 01:46:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.414 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.415 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:54.673 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:54.931 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:54.932 01:46:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:55.189 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.447 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.447 rmmod nvme_tcp 00:07:55.704 rmmod nvme_fabrics 00:07:55.704 rmmod nvme_keyring 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1312421 ']' 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1312421 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1312421 ']' 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1312421 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1312421 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1312421' 00:07:55.704 killing process with pid 1312421 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1312421 00:07:55.704 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1312421 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.963 01:46:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:57.866 00:07:57.866 real 0m46.936s 00:07:57.866 user 3m32.683s 00:07:57.866 sys 0m16.764s 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:57.866 ************************************ 00:07:57.866 END TEST nvmf_ns_hotplug_stress 00:07:57.866 ************************************ 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.866 ************************************ 00:07:57.866 START TEST nvmf_delete_subsystem 00:07:57.866 ************************************ 00:07:57.866 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:58.125 * Looking for test storage... 
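For reference, the ns_hotplug_stress xtrace that finished above reduces to a loop like the following (a simplified sketch reconstructed from the logged rpc.py calls; the real target/ns_hotplug_stress.sh may background the calls and differ in details, which is why the nsids appear out of order in the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    i=0
    while (( i < 10 )); do
        # hot-add null0..null7 as nsid 1..8 on the subsystem while host I/O runs
        for n in {0..7}; do
            $rpc nvmf_subsystem_add_ns -n $((n + 1)) nqn.2016-06.io.spdk:cnode1 "null$n"
        done
        # then hot-remove the same namespaces again
        for nsid in {1..8}; do
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
        (( ++i ))
    done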
00:07:58.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.125 01:46:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.029 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.029 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:00.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:00.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:00.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:00.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.030 01:46:14 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.030 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:08:00.288 00:08:00.288 --- 10.0.0.2 ping statistics --- 00:08:00.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.288 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:08:00.288 00:08:00.288 --- 10.0.0.1 ping statistics --- 00:08:00.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.288 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1319682 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1319682 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1319682 ']' 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.288 01:46:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.288 [2024-07-24 01:46:15.013603] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
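The ping exchanges above are the tail end of nvmftestinit: one port of the detected NIC pair (cvl_0_0) is moved into a private network namespace for the target while the other (cvl_0_1) stays in the default namespace as the initiator, and each side gets an address on 10.0.0.0/24. Condensed from the commands in the xtrace:

    ip netns add cvl_0_0_ns_spdk                       # namespace that hosts the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The nvmf_tgt application started right after (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3) therefore listens on 10.0.0.2 inside that namespace.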
00:08:00.288 [2024-07-24 01:46:15.013688] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.288 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.288 [2024-07-24 01:46:15.076275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.288 [2024-07-24 01:46:15.168154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.288 [2024-07-24 01:46:15.168218] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.288 [2024-07-24 01:46:15.168234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.288 [2024-07-24 01:46:15.168248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.288 [2024-07-24 01:46:15.168267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.288 [2024-07-24 01:46:15.168361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.288 [2024-07-24 01:46:15.168368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 [2024-07-24 01:46:15.317829] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 [2024-07-24 01:46:15.334100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 NULL1 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 Delay0 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1319820 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:00.546 01:46:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:00.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.546 [2024-07-24 01:46:15.408769] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
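At this point the target side of the delete_subsystem test is fully assembled and a perf process has just been launched against it; the test then deletes the subsystem while that I/O is still in flight. A condensed sketch of the sequence, assembled from the rpc_cmd and spdk_nvme_perf invocations logged above (the delay bdev wraps the null bdev so completions are slow enough for the delete to race with outstanding commands):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                  # backing bdev: 1000 MB, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &         # host I/O for 5 s at queue depth 128
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 # pull the subsystem out from under it

The bursts of 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' that follow are the expected outcome: deleting the subsystem aborts the outstanding commands, spdk_nvme_perf exits with 'errors occurred', and the script only waits for the perf pid to disappear before recreating the subsystem and repeating the exercise.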
00:08:02.501 01:46:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.501 01:46:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.501 01:46:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.759 Write completed with error (sct=0, sc=8) 00:08:02.759 starting I/O failed: -6 00:08:02.759 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 [2024-07-24 01:46:17.636926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6970 is same with the state(5) to be set 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 
Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O 
failed: -6 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 starting I/O failed: -6 00:08:02.760 [2024-07-24 01:46:17.637872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c88000c00 is same with the state(5) to be set 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error 
(sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Read completed with error (sct=0, sc=8) 00:08:02.760 Write completed with error (sct=0, sc=8) 00:08:04.132 [2024-07-24 01:46:18.591870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d4a30 is same with the state(5) to be set 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 [2024-07-24 01:46:18.639582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c6e50 is same with the state(5) to be set 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 
00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 [2024-07-24 01:46:18.639776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c74b0 is same with the state(5) to be set 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.132 Write completed with error (sct=0, sc=8) 00:08:04.132 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 [2024-07-24 01:46:18.640522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c8800d660 is same with the state(5) to be set 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 
00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Write completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 Read completed with error (sct=0, sc=8) 00:08:04.133 [2024-07-24 01:46:18.641288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4c8800d000 is same with the state(5) to be set 00:08:04.133 Initializing NVMe Controllers 00:08:04.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:04.133 Controller IO queue size 128, less than required. 00:08:04.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:04.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:04.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:04.133 Initialization complete. Launching workers. 00:08:04.133 ======================================================== 00:08:04.133 Latency(us) 00:08:04.133 Device Information : IOPS MiB/s Average min max 00:08:04.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.82 0.08 891880.50 529.87 1012533.55 00:08:04.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.88 0.08 972125.47 379.22 2002211.30 00:08:04.133 ======================================================== 00:08:04.133 Total : 332.70 0.16 930925.07 379.22 2002211.30 00:08:04.133 00:08:04.133 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.133 [2024-07-24 01:46:18.641848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5d4a30 (9): Bad file descriptor 00:08:04.133 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:04.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:04.133 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1319820 00:08:04.133 01:46:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1319820 00:08:04.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1319820) - No such process 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1319820 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # 
valid_exec_arg wait 1319820 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1319820 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 [2024-07-24 01:46:19.162906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1320230 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- 
# kill -0 1320230 00:08:04.391 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:04.391 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.391 [2024-07-24 01:46:19.219582] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:04.955 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:04.955 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:04.955 01:46:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:05.520 01:46:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:05.520 01:46:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:05.520 01:46:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.086 01:46:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.086 01:46:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:06.086 01:46:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.343 01:46:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.343 01:46:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:06.343 01:46:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:06.907 01:46:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:06.908 01:46:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:06.908 01:46:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.473 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:07.473 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:07.473 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:07.473 Initializing NVMe Controllers 00:08:07.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.473 Controller IO queue size 128, less than required. 00:08:07.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:07.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:07.473 Initialization complete. Launching workers. 
00:08:07.473 ======================================================== 00:08:07.473 Latency(us) 00:08:07.473 Device Information : IOPS MiB/s Average min max 00:08:07.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003584.57 1000193.25 1011500.92 00:08:07.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005382.04 1000178.12 1042033.37 00:08:07.473 ======================================================== 00:08:07.473 Total : 256.00 0.12 1004483.30 1000178.12 1042033.37 00:08:07.473 00:08:08.038 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:08.038 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1320230 00:08:08.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1320230) - No such process 00:08:08.038 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1320230 00:08:08.038 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:08.038 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.039 rmmod nvme_tcp 00:08:08.039 rmmod nvme_fabrics 00:08:08.039 rmmod nvme_keyring 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1319682 ']' 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1319682 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1319682 ']' 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1319682 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1319682 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1319682' 00:08:08.039 killing process with pid 1319682 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1319682 00:08:08.039 01:46:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1319682 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.297 01:46:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.197 00:08:10.197 real 0m12.297s 00:08:10.197 user 0m28.066s 00:08:10.197 sys 0m2.781s 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.197 ************************************ 00:08:10.197 END TEST nvmf_delete_subsystem 00:08:10.197 ************************************ 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.197 01:46:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:10.456 ************************************ 00:08:10.456 START TEST nvmf_host_management 00:08:10.456 ************************************ 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:10.456 * Looking for test storage... 
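host_management.sh starts by sourcing test/nvmf/common.sh and calling nvmftestinit; with NET_TYPE=phy that discovers the two Intel E810 ports (cvl_0_0 and cvl_0_1) and pushes one of them into a private network namespace so target and initiator can talk NVMe/TCP over real NICs on a single box. The ip netns, ip addr and iptables lines a little further down do exactly that; roughly, with the interface names and addresses used in this run:

# Sketch of the netns loopback topology that nvmftestinit builds below.
ip netns add cvl_0_0_ns_spdk                         # the target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # sanity-check the path before the test proper
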
00:08:10.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.456 01:46:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.352 
01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:12.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:12.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.352 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:12.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:12.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:12.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:08:12.353 00:08:12.353 --- 10.0.0.2 ping statistics --- 00:08:12.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.353 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:12.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:08:12.353 00:08:12.353 --- 10.0.0.1 ping statistics --- 00:08:12.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.353 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1322574 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1322574 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1322574 ']' 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.353 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.610 [2024-07-24 01:46:27.274658] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:08:12.610 [2024-07-24 01:46:27.274724] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.610 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.610 [2024-07-24 01:46:27.341013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.610 [2024-07-24 01:46:27.440378] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.610 [2024-07-24 01:46:27.440441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.610 [2024-07-24 01:46:27.440457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.610 [2024-07-24 01:46:27.440470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.610 [2024-07-24 01:46:27.440481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.610 [2024-07-24 01:46:27.440569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.610 [2024-07-24 01:46:27.440689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.610 [2024-07-24 01:46:27.440752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:12.610 [2024-07-24 01:46:27.440754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 [2024-07-24 01:46:27.587500] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 Malloc0 00:08:12.868 [2024-07-24 01:46:27.648565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1322736 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1322736 /var/tmp/bdevperf.sock 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1322736 ']' 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
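With the topology up, nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk (pid 1322574, core mask 0x1E), the TCP transport is created with the options shown above, and the create_subsystems step writes a short RPC batch to rpcs.txt and replays it, which is where the Malloc0 bdev and the 10.0.0.2:4420 listener come from. The batch file itself is not printed in the log; an equivalent unbatched sequence, with the subsystem and host names taken from the bdevperf JSON and the nvmf_subsystem_remove_host call later in this test, and the malloc geometry from MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512, would look roughly like this (rpc_cmd in the log wraps scripts/rpc.py; the serial number is illustrative):

# Sketch of the target-side bring-up performed above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512        # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
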
00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:12.868 { 00:08:12.868 "params": { 00:08:12.868 "name": "Nvme$subsystem", 00:08:12.868 "trtype": "$TEST_TRANSPORT", 00:08:12.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:12.868 "adrfam": "ipv4", 00:08:12.868 "trsvcid": "$NVMF_PORT", 00:08:12.868 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:12.868 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:12.868 "hdgst": ${hdgst:-false}, 00:08:12.868 "ddgst": ${ddgst:-false} 00:08:12.868 }, 00:08:12.868 "method": "bdev_nvme_attach_controller" 00:08:12.868 } 00:08:12.868 EOF 00:08:12.868 )") 00:08:12.868 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:12.869 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:12.869 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:12.869 01:46:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:12.869 "params": { 00:08:12.869 "name": "Nvme0", 00:08:12.869 "trtype": "tcp", 00:08:12.869 "traddr": "10.0.0.2", 00:08:12.869 "adrfam": "ipv4", 00:08:12.869 "trsvcid": "4420", 00:08:12.869 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:12.869 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:12.869 "hdgst": false, 00:08:12.869 "ddgst": false 00:08:12.869 }, 00:08:12.869 "method": "bdev_nvme_attach_controller" 00:08:12.869 }' 00:08:12.869 [2024-07-24 01:46:27.727034] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:08:12.869 [2024-07-24 01:46:27.727123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322736 ] 00:08:12.869 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.126 [2024-07-24 01:46:27.789476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.126 [2024-07-24 01:46:27.876352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.383 Running I/O for 10 seconds... 
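The initiator side never touches the kernel NVMe host stack here: gen_nvmf_target_json emits the bdev_nvme_attach_controller entry printed above and bdevperf receives it through process substitution, which is why its command line shows --json /dev/fd/63. Written out long-hand with a temporary file, the invocation looks roughly like the sketch below; the file path is hypothetical and the outer subsystems/config wrapper is the standard SPDK JSON-config shape assumed here, since the log only prints the inner entry:

# Sketch: run verify I/O against the target from userspace bdevperf only.
cat > /tmp/nvme0_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json /tmp/nvme0_attach.json
# The attached controller surfaces as bdev Nvme0n1, which the iostat polling below inspects.
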
00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:13.383 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.642 01:46:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.642 [2024-07-24 01:46:28.451174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451336] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.451391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1836d20 is same with the state(5) to be set 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.642 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:13.642 [2024-07-24 01:46:28.460608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.642 [2024-07-24 01:46:28.460657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.642 [2024-07-24 01:46:28.460688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.642 [2024-07-24 01:46:28.460715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:13.642 [2024-07-24 01:46:28.460741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824000 is same with the state(5) to be set 00:08:13.642 [2024-07-24 01:46:28.460820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.460841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.460884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.460914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.460943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.460972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.460987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:08:13.642 [2024-07-24 01:46:28.461340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.642 [2024-07-24 01:46:28.461474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.642 [2024-07-24 01:46:28.461489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 
[2024-07-24 01:46:28.461643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 
01:46:28.461937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.461978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.461993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 
01:46:28.462225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 
01:46:28.462523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.643 [2024-07-24 01:46:28.462621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.643 [2024-07-24 01:46:28.462636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.644 [2024-07-24 01:46:28.462649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.644 [2024-07-24 01:46:28.462665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.644 [2024-07-24 01:46:28.462682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.644 [2024-07-24 01:46:28.462697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:13.644 [2024-07-24 01:46:28.462711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:13.644 [2024-07-24 01:46:28.462798] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x81e420 was disconnected and freed. reset controller. 
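The burst of "ABORTED - SQ DELETION" completions above is the expected fallout of target/host_management.sh revoking the host's access while bdevperf still has writes in flight: removing the host NQN tears down its queue pairs, every outstanding command completes with an abort status, and the initiator then resets the controller. A minimal out-of-band reproduction of that step might look like the following sketch (RPC path and NQNs copied from the log; this is not the literal test script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBSYS=nqn.2016-06.io.spdk:cnode0
    HOST=nqn.2016-06.io.spdk:host0

    # Revoke the host while I/O is in flight; the target drops its queue pairs,
    # which is what produces the ABORTED - SQ DELETION completions seen above.
    $RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOST"

    # Re-authorize the host so the initiator's automatic reset/reconnect can succeed.
    $RPC nvmf_subsystem_add_host "$SUBSYS" "$HOST"
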
00:08:13.644 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.644 01:46:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:13.644 [2024-07-24 01:46:28.463922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:13.644 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:13.644 00:08:13.644 Latency(us) 00:08:13.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:13.644 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:13.644 Job: Nvme0n1 ended in about 0.41 seconds with error 00:08:13.644 Verification LBA range: start 0x0 length 0x400 00:08:13.644 Nvme0n1 : 0.41 1562.14 97.63 156.21 0.00 36189.54 2815.62 34175.81 00:08:13.644 =================================================================================================================== 00:08:13.644 Total : 1562.14 97.63 156.21 0.00 36189.54 2815.62 34175.81 00:08:13.644 [2024-07-24 01:46:28.465800] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.644 [2024-07-24 01:46:28.465844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824000 (9): Bad file descriptor 00:08:13.644 [2024-07-24 01:46:28.476742] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:14.574 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1322736 00:08:14.574 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1322736) - No such process 00:08:14.574 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:14.574 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:14.574 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:14.574 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:14.831 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:14.831 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:14.831 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:14.831 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:14.831 { 00:08:14.831 "params": { 00:08:14.831 "name": "Nvme$subsystem", 00:08:14.831 "trtype": "$TEST_TRANSPORT", 00:08:14.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.831 "adrfam": "ipv4", 00:08:14.831 "trsvcid": "$NVMF_PORT", 00:08:14.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.831 "hdgst": ${hdgst:-false}, 00:08:14.831 "ddgst": ${ddgst:-false} 00:08:14.831 }, 00:08:14.832 "method": "bdev_nvme_attach_controller" 00:08:14.832 } 00:08:14.832 EOF 00:08:14.832 )") 00:08:14.832 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:14.832 01:46:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:14.832 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:14.832 01:46:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:14.832 "params": { 00:08:14.832 "name": "Nvme0", 00:08:14.832 "trtype": "tcp", 00:08:14.832 "traddr": "10.0.0.2", 00:08:14.832 "adrfam": "ipv4", 00:08:14.832 "trsvcid": "4420", 00:08:14.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:14.832 "hdgst": false, 00:08:14.832 "ddgst": false 00:08:14.832 }, 00:08:14.832 "method": "bdev_nvme_attach_controller" 00:08:14.832 }' 00:08:14.832 [2024-07-24 01:46:29.511866] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:08:14.832 [2024-07-24 01:46:29.511941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322896 ] 00:08:14.832 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.832 [2024-07-24 01:46:29.570650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.832 [2024-07-24 01:46:29.659396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.088 Running I/O for 1 seconds... 00:08:16.462 00:08:16.462 Latency(us) 00:08:16.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.462 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:16.462 Verification LBA range: start 0x0 length 0x400 00:08:16.462 Nvme0n1 : 1.00 1530.89 95.68 0.00 0.00 41145.13 7184.69 36117.62 00:08:16.462 =================================================================================================================== 00:08:16.462 Total : 1530.89 95.68 0.00 0.00 41145.13 7184.69 36117.62 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.462 rmmod nvme_tcp 00:08:16.462 rmmod nvme_fabrics 00:08:16.462 rmmod nvme_keyring 00:08:16.462 01:46:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1322574 ']' 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1322574 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1322574 ']' 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1322574 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1322574 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1322574' 00:08:16.462 killing process with pid 1322574 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1322574 00:08:16.462 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1322574 00:08:16.721 [2024-07-24 01:46:31.496967] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.721 01:46:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:19.253 00:08:19.253 real 0m8.465s 00:08:19.253 user 0m19.028s 00:08:19.253 sys 0m2.610s 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.253 ************************************ 00:08:19.253 END TEST nvmf_host_management 00:08:19.253 ************************************ 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.253 ************************************ 00:08:19.253 START TEST nvmf_lvol 00:08:19.253 ************************************ 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:19.253 * Looking for test storage... 00:08:19.253 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.253 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.254 01:46:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.254 01:46:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:21.188 01:46:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:21.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:08:21.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.188 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:21.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:21.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
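The block above is nvmf/common.sh walking the PCI bus: both ports of an Intel E810 NIC (device ID 0x159b, ice driver) are found, and their kernel net devices cvl_0_0 and cvl_0_1 are recorded for the TCP test network built next. A rough stand-alone equivalent of that discovery loop, assuming only sysfs (a sketch, not the actual helper):

    # Enumerate net interfaces backed by Intel E810 ports (vendor 0x8086, device 0x159b).
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done
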
00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:08:21.189 00:08:21.189 --- 10.0.0.2 ping statistics --- 00:08:21.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.189 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:08:21.189 00:08:21.189 --- 10.0.0.1 ping statistics --- 00:08:21.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.189 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1325092 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1325092 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1325092 ']' 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.189 01:46:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.189 [2024-07-24 01:46:35.822047] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:08:21.189 [2024-07-24 01:46:35.822146] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.189 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.189 [2024-07-24 01:46:35.907800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.189 [2024-07-24 01:46:36.009249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.189 [2024-07-24 01:46:36.009338] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.189 [2024-07-24 01:46:36.009363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.189 [2024-07-24 01:46:36.009385] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.189 [2024-07-24 01:46:36.009421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.189 [2024-07-24 01:46:36.009494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.189 [2024-07-24 01:46:36.009556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.189 [2024-07-24 01:46:36.009565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.448 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.706 [2024-07-24 01:46:36.417930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.706 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.964 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:21.964 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.222 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:22.222 01:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:22.480 01:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:22.738 01:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=26916428-ab69-4297-8c59-909a8f75fc5e 
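At this point nvmf_lvol.sh has finished its setup phase: a TCP transport, two 64 MiB/512-byte-block malloc bdevs striped into a raid0 bdev, and an lvstore named "lvs" on top of it (UUID 26916428-ab69-4297-8c59-909a8f75fc5e in this run). Condensed into the underlying RPC calls, the sequence is roughly the following (a sketch reusing the rpc.py invocations visible in the trace; the real script captures each return value through its rpc_py wrapper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, flags as traced above
    base0=$($RPC bdev_malloc_create 64 512)                        # first 64 MiB malloc bdev
    base1=$($RPC bdev_malloc_create 64 512)                        # second 64 MiB malloc bdev
    $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"   # raid0 over both malloc bdevs
    lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)                 # prints the new lvstore UUID
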
00:08:22.738 01:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 26916428-ab69-4297-8c59-909a8f75fc5e lvol 20 00:08:22.995 01:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=63666c25-750c-4a1d-b67a-7db2ac8c6155 00:08:22.995 01:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:23.252 01:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 63666c25-750c-4a1d-b67a-7db2ac8c6155 00:08:23.509 01:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:23.766 [2024-07-24 01:46:38.479668] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.766 01:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.023 01:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1325460 00:08:24.023 01:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:24.023 01:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:24.023 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.955 01:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 63666c25-750c-4a1d-b67a-7db2ac8c6155 MY_SNAPSHOT 00:08:25.212 01:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=83079196-9525-40ec-8b63-e9f5b90321a7 00:08:25.212 01:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 63666c25-750c-4a1d-b67a-7db2ac8c6155 30 00:08:25.470 01:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 83079196-9525-40ec-8b63-e9f5b90321a7 MY_CLONE 00:08:26.033 01:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c1e9834e-5525-44ee-b5bf-0c5d1e2ca046 00:08:26.033 01:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c1e9834e-5525-44ee-b5bf-0c5d1e2ca046 00:08:26.598 01:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1325460 00:08:34.701 Initializing NVMe Controllers 00:08:34.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:34.701 Controller IO queue size 128, less than required. 00:08:34.701 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:34.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:34.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:34.701 Initialization complete. Launching workers. 00:08:34.701 ======================================================== 00:08:34.701 Latency(us) 00:08:34.701 Device Information : IOPS MiB/s Average min max 00:08:34.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10719.50 41.87 11944.95 2326.36 116387.65 00:08:34.701 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10493.00 40.99 12199.84 1792.79 50064.84 00:08:34.701 ======================================================== 00:08:34.701 Total : 21212.50 82.86 12071.04 1792.79 116387.65 00:08:34.701 00:08:34.701 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:34.701 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 63666c25-750c-4a1d-b67a-7db2ac8c6155 00:08:34.958 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26916428-ab69-4297-8c59-909a8f75fc5e 00:08:34.958 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:34.958 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:34.959 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:34.959 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.959 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:34.959 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:35.216 rmmod nvme_tcp 00:08:35.216 rmmod nvme_fabrics 00:08:35.216 rmmod nvme_keyring 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1325092 ']' 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1325092 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1325092 ']' 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1325092 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1325092 00:08:35.216 01:46:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1325092' 00:08:35.216 killing process with pid 1325092 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1325092 00:08:35.216 01:46:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1325092 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.474 01:46:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.005 00:08:38.005 real 0m18.663s 00:08:38.005 user 1m3.115s 00:08:38.005 sys 0m5.931s 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.005 ************************************ 00:08:38.005 END TEST nvmf_lvol 00:08:38.005 ************************************ 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.005 ************************************ 00:08:38.005 START TEST nvmf_lvs_grow 00:08:38.005 ************************************ 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:38.005 * Looking for test storage... 
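Stripped of the xtrace noise, the nvmf_lvol body that just finished above is roughly the sequence below. It is a sketch only: $SPDK and $rpc are as in the earlier recap, and $lvs/$lvol/$snap/$clone stand for the UUIDs the trace printed (26916428-..., 63666c25-..., 83079196-..., c1e9834e-...).

  lvol=$("$rpc" bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB volume on the store
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # 10-second 4 KiB random-write load on cores 3-4 while the volume is snapshotted,
  # resized and cloned underneath it.
  "$SPDK/build/bin/spdk_nvme_perf" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  snap=$("$rpc" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  "$rpc" bdev_lvol_resize "$lvol" 30                                 # grow the live volume to 30 MiB
  clone=$("$rpc" bdev_lvol_clone "$snap" MY_CLONE)
  "$rpc" bdev_lvol_inflate "$clone"                                  # decouple the clone from its snapshot
  wait                                                               # let perf finish its run

  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  "$rpc" bdev_lvol_delete "$lvol"
  "$rpc" bdev_lvol_delete_lvstore -u "$lvs"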
00:08:38.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.005 01:46:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.005 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:38.006 01:46:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.006 01:46:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.384 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.384 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:39.384 
01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.384 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.384 01:46:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.384 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:39.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:39.642 00:08:39.642 --- 10.0.0.2 ping statistics --- 00:08:39.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.642 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:39.642 00:08:39.642 --- 10.0.0.1 ping statistics --- 00:08:39.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.642 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1328679 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1328679 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1328679 ']' 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:39.642 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.642 [2024-07-24 01:46:54.469230] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
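The nvmf_lvs_grow run starting here reuses the two-port loopback topology that nvmf_tcp_init configured just above: one physical port (cvl_0_0) is moved into a network namespace and carries the target address, while the other (cvl_0_1) stays in the root namespace as the initiator side. A condensed sketch of those trace lines:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns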
00:08:39.642 [2024-07-24 01:46:54.469314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.642 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.642 [2024-07-24 01:46:54.533353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.898 [2024-07-24 01:46:54.627251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.898 [2024-07-24 01:46:54.627313] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.898 [2024-07-24 01:46:54.627340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.898 [2024-07-24 01:46:54.627354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.898 [2024-07-24 01:46:54.627381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.898 [2024-07-24 01:46:54.627407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.898 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.898 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:39.898 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.899 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.899 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.899 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.899 01:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.156 [2024-07-24 01:46:55.032349] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.413 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:40.413 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:40.413 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.413 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.414 ************************************ 00:08:40.414 START TEST lvs_grow_clean 00:08:40.414 ************************************ 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.414 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.671 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.671 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.929 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:40.929 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:40.929 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:41.185 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:41.185 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:41.185 01:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 lvol 150 00:08:41.443 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=640b0375-aa3b-4298-acbd-9e4fd567ad9d 00:08:41.443 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.443 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.700 [2024-07-24 01:46:56.425582] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:41.700 [2024-07-24 01:46:56.425676] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.700 true 00:08:41.700 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:41.700 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.958 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.958 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:42.216 01:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 640b0375-aa3b-4298-acbd-9e4fd567ad9d 00:08:42.474 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.732 [2024-07-24 01:46:57.420696] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.732 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1329113 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1329113 /var/tmp/bdevperf.sock 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1329113 ']' 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.021 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:43.021 [2024-07-24 01:46:57.720801] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
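The lvs_grow_clean case being set up here exercises growing an lvol store that sits on a file-backed AIO bdev. A condensed sketch of the flow the trace shows ($SPDK and $rpc as before; AIO_FILE is the aio_bdev file under test/nvmf/target, and $lvs is the store UUID the trace prints):

  AIO_FILE="$SPDK/test/nvmf/target/aio_bdev"

  truncate -s 200M "$AIO_FILE"                                       # 200 MiB backing file
  "$rpc" bdev_aio_create "$AIO_FILE" aio_bdev 4096                   # expose it as bdev 'aio_bdev'
  lvs=$("$rpc" bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)               # 4 MiB clusters -> 49 data clusters
  "$rpc" bdev_lvol_create -u "$lvs" lvol 150                         # 150 MiB volume, exported via cnode0

  truncate -s 400M "$AIO_FILE"                                       # grow the backing file...
  "$rpc" bdev_aio_rescan aio_bdev                                    # ...and let the aio bdev pick it up
  "$rpc" bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49

  # While bdevperf drives randwrite I/O against Nvme0n1 over NVMe/TCP, the store itself
  # is grown; after this, total_data_clusters reports 99.
  "$rpc" bdev_lvol_grow_lvstore -u "$lvs"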
00:08:43.021 [2024-07-24 01:46:57.720889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329113 ] 00:08:43.021 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.021 [2024-07-24 01:46:57.781581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.021 [2024-07-24 01:46:57.874753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.279 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.279 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:43.279 01:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:43.536 Nvme0n1 00:08:43.537 01:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:43.794 [ 00:08:43.794 { 00:08:43.794 "name": "Nvme0n1", 00:08:43.794 "aliases": [ 00:08:43.794 "640b0375-aa3b-4298-acbd-9e4fd567ad9d" 00:08:43.794 ], 00:08:43.794 "product_name": "NVMe disk", 00:08:43.794 "block_size": 4096, 00:08:43.794 "num_blocks": 38912, 00:08:43.794 "uuid": "640b0375-aa3b-4298-acbd-9e4fd567ad9d", 00:08:43.794 "assigned_rate_limits": { 00:08:43.794 "rw_ios_per_sec": 0, 00:08:43.794 "rw_mbytes_per_sec": 0, 00:08:43.794 "r_mbytes_per_sec": 0, 00:08:43.794 "w_mbytes_per_sec": 0 00:08:43.794 }, 00:08:43.794 "claimed": false, 00:08:43.794 "zoned": false, 00:08:43.794 "supported_io_types": { 00:08:43.794 "read": true, 00:08:43.794 "write": true, 00:08:43.794 "unmap": true, 00:08:43.794 "flush": true, 00:08:43.794 "reset": true, 00:08:43.794 "nvme_admin": true, 00:08:43.794 "nvme_io": true, 00:08:43.794 "nvme_io_md": false, 00:08:43.794 "write_zeroes": true, 00:08:43.794 "zcopy": false, 00:08:43.794 "get_zone_info": false, 00:08:43.794 "zone_management": false, 00:08:43.794 "zone_append": false, 00:08:43.794 "compare": true, 00:08:43.794 "compare_and_write": true, 00:08:43.794 "abort": true, 00:08:43.794 "seek_hole": false, 00:08:43.794 "seek_data": false, 00:08:43.794 "copy": true, 00:08:43.794 "nvme_iov_md": false 00:08:43.794 }, 00:08:43.794 "memory_domains": [ 00:08:43.794 { 00:08:43.794 "dma_device_id": "system", 00:08:43.794 "dma_device_type": 1 00:08:43.794 } 00:08:43.794 ], 00:08:43.794 "driver_specific": { 00:08:43.794 "nvme": [ 00:08:43.795 { 00:08:43.795 "trid": { 00:08:43.795 "trtype": "TCP", 00:08:43.795 "adrfam": "IPv4", 00:08:43.795 "traddr": "10.0.0.2", 00:08:43.795 "trsvcid": "4420", 00:08:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:43.795 }, 00:08:43.795 "ctrlr_data": { 00:08:43.795 "cntlid": 1, 00:08:43.795 "vendor_id": "0x8086", 00:08:43.795 "model_number": "SPDK bdev Controller", 00:08:43.795 "serial_number": "SPDK0", 00:08:43.795 "firmware_revision": "24.09", 00:08:43.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.795 "oacs": { 00:08:43.795 "security": 0, 00:08:43.795 "format": 0, 00:08:43.795 "firmware": 0, 00:08:43.795 "ns_manage": 0 00:08:43.795 }, 00:08:43.795 
"multi_ctrlr": true, 00:08:43.795 "ana_reporting": false 00:08:43.795 }, 00:08:43.795 "vs": { 00:08:43.795 "nvme_version": "1.3" 00:08:43.795 }, 00:08:43.795 "ns_data": { 00:08:43.795 "id": 1, 00:08:43.795 "can_share": true 00:08:43.795 } 00:08:43.795 } 00:08:43.795 ], 00:08:43.795 "mp_policy": "active_passive" 00:08:43.795 } 00:08:43.795 } 00:08:43.795 ] 00:08:43.795 01:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1329251 00:08:43.795 01:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:43.795 01:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:44.053 Running I/O for 10 seconds... 00:08:44.986 Latency(us) 00:08:44.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.986 Nvme0n1 : 1.00 14036.00 54.83 0.00 0.00 0.00 0.00 0.00 00:08:44.986 =================================================================================================================== 00:08:44.986 Total : 14036.00 54.83 0.00 0.00 0.00 0.00 0.00 00:08:44.986 00:08:45.919 01:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:45.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.919 Nvme0n1 : 2.00 13795.00 53.89 0.00 0.00 0.00 0.00 0.00 00:08:45.919 =================================================================================================================== 00:08:45.919 Total : 13795.00 53.89 0.00 0.00 0.00 0.00 0.00 00:08:45.919 00:08:46.177 true 00:08:46.177 01:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:46.177 01:47:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:46.435 01:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:46.435 01:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:46.435 01:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1329251 00:08:47.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.001 Nvme0n1 : 3.00 13708.67 53.55 0.00 0.00 0.00 0.00 0.00 00:08:47.001 =================================================================================================================== 00:08:47.001 Total : 13708.67 53.55 0.00 0.00 0.00 0.00 0.00 00:08:47.001 00:08:47.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.935 Nvme0n1 : 4.00 13673.50 53.41 0.00 0.00 0.00 0.00 0.00 00:08:47.935 =================================================================================================================== 00:08:47.935 Total : 13673.50 53.41 0.00 0.00 0.00 0.00 0.00 00:08:47.935 00:08:48.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:48.869 Nvme0n1 : 5.00 13665.20 53.38 0.00 0.00 0.00 0.00 0.00 00:08:48.869 =================================================================================================================== 00:08:48.869 Total : 13665.20 53.38 0.00 0.00 0.00 0.00 0.00 00:08:48.869 00:08:50.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.243 Nvme0n1 : 6.00 13669.00 53.39 0.00 0.00 0.00 0.00 0.00 00:08:50.243 =================================================================================================================== 00:08:50.243 Total : 13669.00 53.39 0.00 0.00 0.00 0.00 0.00 00:08:50.243 00:08:51.177 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.177 Nvme0n1 : 7.00 13671.71 53.41 0.00 0.00 0.00 0.00 0.00 00:08:51.177 =================================================================================================================== 00:08:51.177 Total : 13671.71 53.41 0.00 0.00 0.00 0.00 0.00 00:08:51.177 00:08:52.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.112 Nvme0n1 : 8.00 13680.75 53.44 0.00 0.00 0.00 0.00 0.00 00:08:52.112 =================================================================================================================== 00:08:52.112 Total : 13680.75 53.44 0.00 0.00 0.00 0.00 0.00 00:08:52.112 00:08:53.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.046 Nvme0n1 : 9.00 13685.11 53.46 0.00 0.00 0.00 0.00 0.00 00:08:53.046 =================================================================================================================== 00:08:53.046 Total : 13685.11 53.46 0.00 0.00 0.00 0.00 0.00 00:08:53.046 00:08:53.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.978 Nvme0n1 : 10.00 13690.20 53.48 0.00 0.00 0.00 0.00 0.00 00:08:53.978 =================================================================================================================== 00:08:53.978 Total : 13690.20 53.48 0.00 0.00 0.00 0.00 0.00 00:08:53.978 00:08:53.978 00:08:53.978 Latency(us) 00:08:53.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.978 Nvme0n1 : 10.01 13690.20 53.48 0.00 0.00 9340.82 3058.35 18738.44 00:08:53.978 =================================================================================================================== 00:08:53.978 Total : 13690.20 53.48 0.00 0.00 9340.82 3058.35 18738.44 00:08:53.978 0 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1329113 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1329113 ']' 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1329113 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1329113 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:53.978 
01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:53.978 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1329113' 00:08:53.978 killing process with pid 1329113 00:08:53.979 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1329113 00:08:53.979 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.979 00:08:53.979 Latency(us) 00:08:53.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.979 =================================================================================================================== 00:08:53.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.979 01:47:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1329113 00:08:54.236 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.494 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.751 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:54.751 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:55.009 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:55.009 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:55.009 01:47:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.266 [2024-07-24 01:47:10.082107] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.266 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:55.523 request: 00:08:55.523 { 00:08:55.523 "uuid": "1eee4673-f77c-4e32-81fa-03dc7f7950c1", 00:08:55.523 "method": "bdev_lvol_get_lvstores", 00:08:55.523 "req_id": 1 00:08:55.523 } 00:08:55.523 Got JSON-RPC error response 00:08:55.523 response: 00:08:55.523 { 00:08:55.523 "code": -19, 00:08:55.523 "message": "No such device" 00:08:55.523 } 00:08:55.523 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:55.523 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:55.523 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:55.523 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:55.523 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.780 aio_bdev 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 640b0375-aa3b-4298-acbd-9e4fd567ad9d 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=640b0375-aa3b-4298-acbd-9e4fd567ad9d 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:55.780 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.345 01:47:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 640b0375-aa3b-4298-acbd-9e4fd567ad9d -t 2000 00:08:56.345 [ 00:08:56.345 { 00:08:56.345 "name": "640b0375-aa3b-4298-acbd-9e4fd567ad9d", 00:08:56.345 "aliases": [ 00:08:56.345 "lvs/lvol" 00:08:56.345 ], 00:08:56.345 "product_name": "Logical Volume", 00:08:56.345 "block_size": 4096, 00:08:56.345 "num_blocks": 38912, 00:08:56.345 "uuid": "640b0375-aa3b-4298-acbd-9e4fd567ad9d", 00:08:56.345 "assigned_rate_limits": { 00:08:56.345 "rw_ios_per_sec": 0, 00:08:56.345 "rw_mbytes_per_sec": 0, 00:08:56.345 "r_mbytes_per_sec": 0, 00:08:56.345 "w_mbytes_per_sec": 0 00:08:56.345 }, 00:08:56.345 "claimed": false, 00:08:56.345 "zoned": false, 00:08:56.345 "supported_io_types": { 00:08:56.345 "read": true, 00:08:56.345 "write": true, 00:08:56.345 "unmap": true, 00:08:56.345 "flush": false, 00:08:56.345 "reset": true, 00:08:56.345 "nvme_admin": false, 00:08:56.345 "nvme_io": false, 00:08:56.345 "nvme_io_md": false, 00:08:56.345 "write_zeroes": true, 00:08:56.345 "zcopy": false, 00:08:56.345 "get_zone_info": false, 00:08:56.345 "zone_management": false, 00:08:56.345 "zone_append": false, 00:08:56.345 "compare": false, 00:08:56.345 "compare_and_write": false, 00:08:56.345 "abort": false, 00:08:56.345 "seek_hole": true, 00:08:56.345 "seek_data": true, 00:08:56.345 "copy": false, 00:08:56.345 "nvme_iov_md": false 00:08:56.345 }, 00:08:56.345 "driver_specific": { 00:08:56.345 "lvol": { 00:08:56.345 "lvol_store_uuid": "1eee4673-f77c-4e32-81fa-03dc7f7950c1", 00:08:56.345 "base_bdev": "aio_bdev", 00:08:56.345 "thin_provision": false, 00:08:56.345 "num_allocated_clusters": 38, 00:08:56.345 "snapshot": false, 00:08:56.345 "clone": false, 00:08:56.345 "esnap_clone": false 00:08:56.345 } 00:08:56.345 } 00:08:56.345 } 00:08:56.345 ] 00:08:56.345 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:56.345 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:56.345 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:56.602 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:56.602 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:56.602 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:56.859 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:56.859 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 640b0375-aa3b-4298-acbd-9e4fd567ad9d 00:08:57.117 01:47:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1eee4673-f77c-4e32-81fa-03dc7f7950c1 00:08:57.375 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.632 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.632 00:08:57.632 real 0m17.429s 00:08:57.632 user 0m15.931s 00:08:57.632 sys 0m2.335s 00:08:57.632 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.632 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:57.632 ************************************ 00:08:57.632 END TEST lvs_grow_clean 00:08:57.632 ************************************ 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:57.890 ************************************ 00:08:57.890 START TEST lvs_grow_dirty 00:08:57.890 ************************************ 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.890 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:58.148 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:58.148 01:47:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:58.405 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=a9ac506d-de22-4874-a0ad-3bf50a437232 00:08:58.405 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:08:58.406 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:58.663 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:58.663 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:58.663 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a9ac506d-de22-4874-a0ad-3bf50a437232 lvol 150 00:08:58.919 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=700fdad5-01ee-4f0f-a037-826725f0a23f 00:08:58.919 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.919 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.214 [2024-07-24 01:47:13.906583] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.214 [2024-07-24 01:47:13.906693] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.214 true 00:08:59.214 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:08:59.214 01:47:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:59.494 01:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:59.494 01:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.751 01:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 700fdad5-01ee-4f0f-a037-826725f0a23f 00:09:00.008 01:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:00.265 [2024-07-24 01:47:14.941739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.265 01:47:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
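Condensed, the setup traced above builds the lvs_grow_dirty fixture: a 200M file-backed AIO bdev, a logical volume store on top of it, a 150M lvol, a grow of the backing file to 400M picked up via rescan, and an NVMe-oF/TCP subsystem exposing that lvol. A minimal sketch of the same sequence, with the short scripts/rpc.py path and a placeholder backing file standing in for the absolute paths and generated UUIDs in the log (it also assumes the TCP transport was already created earlier in the run):

    truncate -s 200M /tmp/aio_file
    scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M /tmp/aio_file            # grow the backing file under the AIO bdev
    scripts/rpc.py bdev_aio_rescan aio_bdev   # AIO bdev resizes from 51200 to 102400 blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The lvstore is created while the file is still 200M, which is why bdev_lvol_get_lvstores reports 49 data clusters at this point; the grow is only reflected in the store later, when bdev_lvol_grow_lvstore runs during the I/O phase below.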
00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1331293 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1331293 /var/tmp/bdevperf.sock 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1331293 ']' 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.522 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:00.522 [2024-07-24 01:47:15.246916] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
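The bdevperf invocation above drives the measurement. Roughly: -r selects the RPC socket, -m 0x2 pins it to core 1 (hence "Core Mask 0x2" in the job lines), -o 4096 sets the I/O size in bytes, -q 128 the queue depth, -w randwrite the workload, -t 10 the run time in seconds, and -z keeps it idle until the perform_tests RPC arrives from bdevperf.py further down; -S 1 appears to be what produces the one-line-per-second status tables. With a 4096-byte I/O size the MiB/s column is simply IOPS scaled by the block size, e.g. for the first interval: 15370.00 IOPS x 4096 B / 1048576 B/MiB ≈ 60.04 MiB/s, matching the table below.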
00:09:00.522 [2024-07-24 01:47:15.247009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331293 ] 00:09:00.522 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.522 [2024-07-24 01:47:15.307634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.522 [2024-07-24 01:47:15.390796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.780 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.780 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:00.780 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:01.344 Nvme0n1 00:09:01.344 01:47:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:01.602 [ 00:09:01.602 { 00:09:01.602 "name": "Nvme0n1", 00:09:01.602 "aliases": [ 00:09:01.602 "700fdad5-01ee-4f0f-a037-826725f0a23f" 00:09:01.602 ], 00:09:01.602 "product_name": "NVMe disk", 00:09:01.602 "block_size": 4096, 00:09:01.602 "num_blocks": 38912, 00:09:01.602 "uuid": "700fdad5-01ee-4f0f-a037-826725f0a23f", 00:09:01.602 "assigned_rate_limits": { 00:09:01.602 "rw_ios_per_sec": 0, 00:09:01.602 "rw_mbytes_per_sec": 0, 00:09:01.602 "r_mbytes_per_sec": 0, 00:09:01.602 "w_mbytes_per_sec": 0 00:09:01.602 }, 00:09:01.602 "claimed": false, 00:09:01.602 "zoned": false, 00:09:01.602 "supported_io_types": { 00:09:01.602 "read": true, 00:09:01.602 "write": true, 00:09:01.602 "unmap": true, 00:09:01.602 "flush": true, 00:09:01.602 "reset": true, 00:09:01.602 "nvme_admin": true, 00:09:01.602 "nvme_io": true, 00:09:01.602 "nvme_io_md": false, 00:09:01.602 "write_zeroes": true, 00:09:01.602 "zcopy": false, 00:09:01.602 "get_zone_info": false, 00:09:01.602 "zone_management": false, 00:09:01.602 "zone_append": false, 00:09:01.602 "compare": true, 00:09:01.602 "compare_and_write": true, 00:09:01.602 "abort": true, 00:09:01.602 "seek_hole": false, 00:09:01.602 "seek_data": false, 00:09:01.602 "copy": true, 00:09:01.602 "nvme_iov_md": false 00:09:01.602 }, 00:09:01.602 "memory_domains": [ 00:09:01.602 { 00:09:01.602 "dma_device_id": "system", 00:09:01.602 "dma_device_type": 1 00:09:01.602 } 00:09:01.602 ], 00:09:01.602 "driver_specific": { 00:09:01.602 "nvme": [ 00:09:01.602 { 00:09:01.602 "trid": { 00:09:01.602 "trtype": "TCP", 00:09:01.602 "adrfam": "IPv4", 00:09:01.602 "traddr": "10.0.0.2", 00:09:01.602 "trsvcid": "4420", 00:09:01.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:01.602 }, 00:09:01.602 "ctrlr_data": { 00:09:01.602 "cntlid": 1, 00:09:01.602 "vendor_id": "0x8086", 00:09:01.602 "model_number": "SPDK bdev Controller", 00:09:01.602 "serial_number": "SPDK0", 00:09:01.602 "firmware_revision": "24.09", 00:09:01.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:01.602 "oacs": { 00:09:01.602 "security": 0, 00:09:01.602 "format": 0, 00:09:01.602 "firmware": 0, 00:09:01.602 "ns_manage": 0 00:09:01.602 }, 00:09:01.602 
"multi_ctrlr": true, 00:09:01.602 "ana_reporting": false 00:09:01.602 }, 00:09:01.602 "vs": { 00:09:01.602 "nvme_version": "1.3" 00:09:01.602 }, 00:09:01.602 "ns_data": { 00:09:01.602 "id": 1, 00:09:01.602 "can_share": true 00:09:01.602 } 00:09:01.602 } 00:09:01.602 ], 00:09:01.602 "mp_policy": "active_passive" 00:09:01.602 } 00:09:01.602 } 00:09:01.602 ] 00:09:01.602 01:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1331435 00:09:01.602 01:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:01.602 01:47:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.602 Running I/O for 10 seconds... 00:09:02.535 Latency(us) 00:09:02.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.535 Nvme0n1 : 1.00 15370.00 60.04 0.00 0.00 0.00 0.00 0.00 00:09:02.535 =================================================================================================================== 00:09:02.535 Total : 15370.00 60.04 0.00 0.00 0.00 0.00 0.00 00:09:02.535 00:09:03.469 01:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:03.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.727 Nvme0n1 : 2.00 15432.00 60.28 0.00 0.00 0.00 0.00 0.00 00:09:03.727 =================================================================================================================== 00:09:03.727 Total : 15432.00 60.28 0.00 0.00 0.00 0.00 0.00 00:09:03.727 00:09:03.727 true 00:09:03.727 01:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:03.727 01:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:03.985 01:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:03.985 01:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:03.985 01:47:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1331435 00:09:04.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.551 Nvme0n1 : 3.00 15284.67 59.71 0.00 0.00 0.00 0.00 0.00 00:09:04.551 =================================================================================================================== 00:09:04.551 Total : 15284.67 59.71 0.00 0.00 0.00 0.00 0.00 00:09:04.551 00:09:05.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.924 Nvme0n1 : 4.00 15400.50 60.16 0.00 0.00 0.00 0.00 0.00 00:09:05.924 =================================================================================================================== 00:09:05.924 Total : 15400.50 60.16 0.00 0.00 0.00 0.00 0.00 00:09:05.924 00:09:06.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:06.859 Nvme0n1 : 5.00 15463.40 60.40 0.00 0.00 0.00 0.00 0.00 00:09:06.859 =================================================================================================================== 00:09:06.859 Total : 15463.40 60.40 0.00 0.00 0.00 0.00 0.00 00:09:06.859 00:09:07.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.792 Nvme0n1 : 6.00 15468.50 60.42 0.00 0.00 0.00 0.00 0.00 00:09:07.792 =================================================================================================================== 00:09:07.792 Total : 15468.50 60.42 0.00 0.00 0.00 0.00 0.00 00:09:07.792 00:09:08.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.725 Nvme0n1 : 7.00 15435.86 60.30 0.00 0.00 0.00 0.00 0.00 00:09:08.725 =================================================================================================================== 00:09:08.725 Total : 15435.86 60.30 0.00 0.00 0.00 0.00 0.00 00:09:08.725 00:09:09.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.657 Nvme0n1 : 8.00 15379.62 60.08 0.00 0.00 0.00 0.00 0.00 00:09:09.657 =================================================================================================================== 00:09:09.657 Total : 15379.62 60.08 0.00 0.00 0.00 0.00 0.00 00:09:09.657 00:09:10.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.591 Nvme0n1 : 9.00 15392.33 60.13 0.00 0.00 0.00 0.00 0.00 00:09:10.591 =================================================================================================================== 00:09:10.591 Total : 15392.33 60.13 0.00 0.00 0.00 0.00 0.00 00:09:10.591 00:09:11.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.965 Nvme0n1 : 10.00 15390.10 60.12 0.00 0.00 0.00 0.00 0.00 00:09:11.965 =================================================================================================================== 00:09:11.965 Total : 15390.10 60.12 0.00 0.00 0.00 0.00 0.00 00:09:11.965 00:09:11.965 00:09:11.965 Latency(us) 00:09:11.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.965 Nvme0n1 : 10.01 15387.33 60.11 0.00 0.00 8310.41 3543.80 17864.63 00:09:11.965 =================================================================================================================== 00:09:11.965 Total : 15387.33 60.11 0.00 0.00 8310.41 3543.80 17864.63 00:09:11.965 0 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1331293 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1331293 ']' 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1331293 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1331293 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:11.965 
01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1331293' 00:09:11.965 killing process with pid 1331293 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1331293 00:09:11.965 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.965 00:09:11.965 Latency(us) 00:09:11.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.965 =================================================================================================================== 00:09:11.965 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1331293 00:09:11.965 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.223 01:47:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.481 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:12.481 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1328679 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1328679 00:09:12.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1328679 Killed "${NVMF_APP[@]}" "$@" 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1332770 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 1332770 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1332770 ']' 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.739 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:12.997 [2024-07-24 01:47:27.657469] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:09:12.997 [2024-07-24 01:47:27.657560] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.997 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.997 [2024-07-24 01:47:27.721137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.997 [2024-07-24 01:47:27.810343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.997 [2024-07-24 01:47:27.810410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.997 [2024-07-24 01:47:27.810427] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.997 [2024-07-24 01:47:27.810441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.997 [2024-07-24 01:47:27.810452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
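This is the dirty half of the test: the first nvmf_tgt was killed with SIGKILL while the grown lvstore was still loaded, so nothing was cleanly unloaded. A fresh target is started in the cvl_0_0_ns_spdk namespace on core 0 (-m 0x1) with every tracepoint group enabled (-e 0xFFFF), which is why the notices above point at /dev/shm/nvmf_trace.0; per those notices the trace can be pulled while the target runs, for example (a sketch, assuming the shared-memory id 0 used here):

    spdk_trace -s nvmf -i 0
    # or simply copy /dev/shm/nvmf_trace.0 for offline analysis, as the teardown below does with tar

When the AIO bdev is re-created on top of the old backing file just after this, the lvstore load has to replay its metadata, which is what the "Performing recovery on blobstore" notices that follow are about.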
00:09:12.997 [2024-07-24 01:47:27.810481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.255 01:47:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.513 [2024-07-24 01:47:28.221571] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:13.513 [2024-07-24 01:47:28.221715] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:13.513 [2024-07-24 01:47:28.221773] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 700fdad5-01ee-4f0f-a037-826725f0a23f 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=700fdad5-01ee-4f0f-a037-826725f0a23f 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:13.513 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:13.771 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 700fdad5-01ee-4f0f-a037-826725f0a23f -t 2000 00:09:14.029 [ 00:09:14.029 { 00:09:14.029 "name": "700fdad5-01ee-4f0f-a037-826725f0a23f", 00:09:14.029 "aliases": [ 00:09:14.029 "lvs/lvol" 00:09:14.029 ], 00:09:14.029 "product_name": "Logical Volume", 00:09:14.029 "block_size": 4096, 00:09:14.029 "num_blocks": 38912, 00:09:14.029 "uuid": "700fdad5-01ee-4f0f-a037-826725f0a23f", 00:09:14.029 "assigned_rate_limits": { 00:09:14.029 "rw_ios_per_sec": 0, 00:09:14.029 "rw_mbytes_per_sec": 0, 00:09:14.029 "r_mbytes_per_sec": 0, 00:09:14.029 "w_mbytes_per_sec": 0 00:09:14.029 }, 00:09:14.029 "claimed": false, 00:09:14.029 "zoned": false, 
00:09:14.029 "supported_io_types": { 00:09:14.029 "read": true, 00:09:14.029 "write": true, 00:09:14.029 "unmap": true, 00:09:14.029 "flush": false, 00:09:14.029 "reset": true, 00:09:14.029 "nvme_admin": false, 00:09:14.029 "nvme_io": false, 00:09:14.029 "nvme_io_md": false, 00:09:14.029 "write_zeroes": true, 00:09:14.029 "zcopy": false, 00:09:14.029 "get_zone_info": false, 00:09:14.029 "zone_management": false, 00:09:14.029 "zone_append": false, 00:09:14.029 "compare": false, 00:09:14.029 "compare_and_write": false, 00:09:14.029 "abort": false, 00:09:14.029 "seek_hole": true, 00:09:14.029 "seek_data": true, 00:09:14.029 "copy": false, 00:09:14.029 "nvme_iov_md": false 00:09:14.029 }, 00:09:14.029 "driver_specific": { 00:09:14.029 "lvol": { 00:09:14.029 "lvol_store_uuid": "a9ac506d-de22-4874-a0ad-3bf50a437232", 00:09:14.029 "base_bdev": "aio_bdev", 00:09:14.029 "thin_provision": false, 00:09:14.029 "num_allocated_clusters": 38, 00:09:14.029 "snapshot": false, 00:09:14.029 "clone": false, 00:09:14.029 "esnap_clone": false 00:09:14.029 } 00:09:14.029 } 00:09:14.029 } 00:09:14.029 ] 00:09:14.029 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:14.029 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:14.029 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:14.287 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:14.287 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:14.287 01:47:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:14.545 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:14.545 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:14.803 [2024-07-24 01:47:29.474516] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:14.803 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:15.062 request: 00:09:15.062 { 00:09:15.062 "uuid": "a9ac506d-de22-4874-a0ad-3bf50a437232", 00:09:15.062 "method": "bdev_lvol_get_lvstores", 00:09:15.062 "req_id": 1 00:09:15.062 } 00:09:15.062 Got JSON-RPC error response 00:09:15.062 response: 00:09:15.062 { 00:09:15.062 "code": -19, 00:09:15.062 "message": "No such device" 00:09:15.062 } 00:09:15.062 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:15.062 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:15.062 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:15.062 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:15.062 01:47:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.320 aio_bdev 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 700fdad5-01ee-4f0f-a037-826725f0a23f 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=700fdad5-01ee-4f0f-a037-826725f0a23f 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:15.320 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.578 01:47:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 700fdad5-01ee-4f0f-a037-826725f0a23f -t 2000 00:09:15.871 [ 00:09:15.871 { 00:09:15.871 "name": "700fdad5-01ee-4f0f-a037-826725f0a23f", 00:09:15.871 "aliases": [ 00:09:15.871 "lvs/lvol" 00:09:15.871 ], 00:09:15.871 "product_name": "Logical Volume", 00:09:15.871 "block_size": 4096, 00:09:15.871 "num_blocks": 38912, 00:09:15.871 "uuid": "700fdad5-01ee-4f0f-a037-826725f0a23f", 00:09:15.871 "assigned_rate_limits": { 00:09:15.871 "rw_ios_per_sec": 0, 00:09:15.871 "rw_mbytes_per_sec": 0, 00:09:15.871 "r_mbytes_per_sec": 0, 00:09:15.871 "w_mbytes_per_sec": 0 00:09:15.871 }, 00:09:15.871 "claimed": false, 00:09:15.871 "zoned": false, 00:09:15.871 "supported_io_types": { 00:09:15.871 "read": true, 00:09:15.871 "write": true, 00:09:15.871 "unmap": true, 00:09:15.871 "flush": false, 00:09:15.871 "reset": true, 00:09:15.871 "nvme_admin": false, 00:09:15.871 "nvme_io": false, 00:09:15.871 "nvme_io_md": false, 00:09:15.871 "write_zeroes": true, 00:09:15.871 "zcopy": false, 00:09:15.871 "get_zone_info": false, 00:09:15.871 "zone_management": false, 00:09:15.871 "zone_append": false, 00:09:15.871 "compare": false, 00:09:15.871 "compare_and_write": false, 00:09:15.871 "abort": false, 00:09:15.871 "seek_hole": true, 00:09:15.871 "seek_data": true, 00:09:15.871 "copy": false, 00:09:15.871 "nvme_iov_md": false 00:09:15.871 }, 00:09:15.871 "driver_specific": { 00:09:15.871 "lvol": { 00:09:15.871 "lvol_store_uuid": "a9ac506d-de22-4874-a0ad-3bf50a437232", 00:09:15.871 "base_bdev": "aio_bdev", 00:09:15.871 "thin_provision": false, 00:09:15.871 "num_allocated_clusters": 38, 00:09:15.871 "snapshot": false, 00:09:15.871 "clone": false, 00:09:15.871 "esnap_clone": false 00:09:15.871 } 00:09:15.871 } 00:09:15.871 } 00:09:15.871 ] 00:09:15.871 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:15.871 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:15.871 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:16.133 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:16.133 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9ac506d-de22-4874-a0ad-3bf50a437232 00:09:16.133 01:47:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:16.391 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:16.391 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 700fdad5-01ee-4f0f-a037-826725f0a23f 00:09:16.649 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9ac506d-de22-4874-a0ad-3bf50a437232 
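The two cluster checks just above are the pass criteria for the dirty path, and the numbers are consistent with the recovered metadata printed earlier: the lvol reports num_allocated_clusters 38, the store reports free_clusters 61, and 38 + 61 = 99 total 4MiB data clusters, i.e. the grown 400M backing file minus metadata rather than the 49 clusters the store started with. A condensed sketch of those checks, again with the short rpc.py path standing in for the absolute one:

    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 ))   # the grow survived the kill -9 and the blobstore recovery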
00:09:16.906 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.165 00:09:17.165 real 0m19.312s 00:09:17.165 user 0m47.032s 00:09:17.165 sys 0m5.471s 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:17.165 ************************************ 00:09:17.165 END TEST lvs_grow_dirty 00:09:17.165 ************************************ 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:17.165 nvmf_trace.0 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.165 rmmod nvme_tcp 00:09:17.165 rmmod nvme_fabrics 00:09:17.165 rmmod nvme_keyring 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1332770 ']' 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1332770 00:09:17.165 
01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1332770 ']' 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1332770 00:09:17.165 01:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1332770 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1332770' 00:09:17.165 killing process with pid 1332770 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1332770 00:09:17.165 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1332770 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.423 01:47:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:19.953 00:09:19.953 real 0m41.977s 00:09:19.953 user 1m8.660s 00:09:19.953 sys 0m9.654s 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.953 ************************************ 00:09:19.953 END TEST nvmf_lvs_grow 00:09:19.953 ************************************ 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.953 ************************************ 00:09:19.953 START TEST nvmf_bdev_io_wait 00:09:19.953 ************************************ 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:19.953 * Looking for test storage... 00:09:19.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.953 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.954 
01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.954 01:47:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:21.853 01:47:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:21.853 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:21.853 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:21.853 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:21.853 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:21.853 01:47:36 
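[editor's note] The discovery loop traced above resolves each supported PCI function (here two Intel E810 ports, device ID 0x159b bound to the ice driver) to its kernel net device by globbing sysfs. A minimal standalone sketch of that lookup, assuming the same sysfs layout the common.sh helper relies on; the helper name and loop are illustrative only:

# List the net devices bound to one PCI function (hypothetical helper mirroring gather_supported_nvmf_pci_devs)
pci=0000:0a:00.0
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$dev" ] || continue                      # glob may not match if the NIC exposes no netdev
    echo "Found net device under $pci: ${dev##*/}"
done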
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:09:21.853 00:09:21.853 --- 10.0.0.2 ping statistics --- 00:09:21.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.853 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:09:21.853 00:09:21.853 --- 10.0.0.1 ping statistics --- 00:09:21.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.853 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1335264 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1335264 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1335264 ']' 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.853 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:21.853 [2024-07-24 01:47:36.567702] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
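[editor's note] Condensed, the nvmf_tcp_init plumbing traced above isolates the target-side port in its own network namespace and verifies reachability in both directions before the target application is brought up inside that namespace. All device names and addresses below are taken directly from the trace:

# Target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator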
00:09:21.853 [2024-07-24 01:47:36.567794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.853 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.853 [2024-07-24 01:47:36.635361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.853 [2024-07-24 01:47:36.727576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.853 [2024-07-24 01:47:36.727640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.853 [2024-07-24 01:47:36.727656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.853 [2024-07-24 01:47:36.727669] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.853 [2024-07-24 01:47:36.727681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.853 [2024-07-24 01:47:36.727767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.853 [2024-07-24 01:47:36.727825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.853 [2024-07-24 01:47:36.727941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.853 [2024-07-24 01:47:36.727944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:22.111 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 
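[editor's note] The target is started with --wait-for-rpc so that bdev_set_options can be issued before subsystem initialization; only then is framework_start_init called. The tiny values passed here (-p 5 -c 1, presumably the bdev_io pool size and per-thread cache size) starve the bdev_io pool on purpose, so the queue-depth-128 workloads launched below are forced through the bdev IO-wait retry path this test exercises. A hedged sketch of the same sequence against a standalone target using the stock rpc.py client (default socket path assumed; the -o and -u 8192 transport flags are reproduced as seen in the trace):

# Assumes nvmf_tgt was started with --wait-for-rpc and listens on /var/tmp/spdk.sock
scripts/rpc.py bdev_set_options -p 5 -c 1        # shrink the bdev_io pool to force IO-wait retries
scripts/rpc.py framework_start_init              # finish subsystem init once options are set
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192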
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 [2024-07-24 01:47:36.876440] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 Malloc0 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:22.112 [2024-07-24 01:47:36.938623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1335321 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1335323 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:22.112 { 00:09:22.112 "params": { 00:09:22.112 "name": "Nvme$subsystem", 00:09:22.112 "trtype": "$TEST_TRANSPORT", 00:09:22.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.112 "adrfam": "ipv4", 00:09:22.112 "trsvcid": "$NVMF_PORT", 00:09:22.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.112 "hdgst": ${hdgst:-false}, 00:09:22.112 "ddgst": ${ddgst:-false} 00:09:22.112 }, 00:09:22.112 "method": "bdev_nvme_attach_controller" 00:09:22.112 } 00:09:22.112 EOF 00:09:22.112 )") 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1335325 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1335327 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:22.112 { 00:09:22.112 "params": { 00:09:22.112 "name": "Nvme$subsystem", 00:09:22.112 "trtype": "$TEST_TRANSPORT", 00:09:22.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.112 "adrfam": "ipv4", 00:09:22.112 "trsvcid": "$NVMF_PORT", 00:09:22.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.112 "hdgst": ${hdgst:-false}, 00:09:22.112 "ddgst": ${ddgst:-false} 00:09:22.112 }, 00:09:22.112 "method": "bdev_nvme_attach_controller" 00:09:22.112 } 00:09:22.112 EOF 00:09:22.112 )") 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:22.112 { 00:09:22.112 "params": { 00:09:22.112 "name": "Nvme$subsystem", 00:09:22.112 "trtype": "$TEST_TRANSPORT", 00:09:22.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.112 "adrfam": "ipv4", 00:09:22.112 "trsvcid": "$NVMF_PORT", 00:09:22.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.112 "hdgst": ${hdgst:-false}, 00:09:22.112 "ddgst": ${ddgst:-false} 00:09:22.112 }, 00:09:22.112 "method": "bdev_nvme_attach_controller" 00:09:22.112 } 00:09:22.112 EOF 00:09:22.112 )") 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:22.112 { 00:09:22.112 "params": { 00:09:22.112 "name": "Nvme$subsystem", 00:09:22.112 "trtype": "$TEST_TRANSPORT", 00:09:22.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:22.112 "adrfam": "ipv4", 00:09:22.112 "trsvcid": "$NVMF_PORT", 00:09:22.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:22.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:22.112 "hdgst": ${hdgst:-false}, 00:09:22.112 "ddgst": ${ddgst:-false} 00:09:22.112 }, 00:09:22.112 "method": "bdev_nvme_attach_controller" 00:09:22.112 } 00:09:22.112 EOF 00:09:22.112 )") 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1335321 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:22.112 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:22.112 "params": { 00:09:22.112 "name": "Nvme1", 00:09:22.112 "trtype": "tcp", 00:09:22.112 "traddr": "10.0.0.2", 00:09:22.112 "adrfam": "ipv4", 00:09:22.112 "trsvcid": "4420", 00:09:22.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.113 "hdgst": false, 00:09:22.113 "ddgst": false 00:09:22.113 }, 00:09:22.113 "method": "bdev_nvme_attach_controller" 00:09:22.113 }' 00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
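[editor's note] At this point the target side is fully configured (a 64 MiB Malloc0 bdev with 512-byte blocks, exported as nqn.2016-06.io.spdk:cnode1 and listening on 10.0.0.2:4420), and four bdevperf instances are launched in parallel, one per workload, each fed a generated JSON config on fd 63 that attaches the same controller. A condensed, hedged recap of the launch pattern visible in the trace (paths shortened, PID bookkeeping omitted):

# Target-side RPCs, as issued above
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side workloads: distinct core masks and shared-memory ids, same generated JSON config
bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &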
00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:22.113 "params": { 00:09:22.113 "name": "Nvme1", 00:09:22.113 "trtype": "tcp", 00:09:22.113 "traddr": "10.0.0.2", 00:09:22.113 "adrfam": "ipv4", 00:09:22.113 "trsvcid": "4420", 00:09:22.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.113 "hdgst": false, 00:09:22.113 "ddgst": false 00:09:22.113 }, 00:09:22.113 "method": "bdev_nvme_attach_controller" 00:09:22.113 }' 00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:22.113 "params": { 00:09:22.113 "name": "Nvme1", 00:09:22.113 "trtype": "tcp", 00:09:22.113 "traddr": "10.0.0.2", 00:09:22.113 "adrfam": "ipv4", 00:09:22.113 "trsvcid": "4420", 00:09:22.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.113 "hdgst": false, 00:09:22.113 "ddgst": false 00:09:22.113 }, 00:09:22.113 "method": "bdev_nvme_attach_controller" 00:09:22.113 }' 00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:22.113 01:47:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:22.113 "params": { 00:09:22.113 "name": "Nvme1", 00:09:22.113 "trtype": "tcp", 00:09:22.113 "traddr": "10.0.0.2", 00:09:22.113 "adrfam": "ipv4", 00:09:22.113 "trsvcid": "4420", 00:09:22.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:22.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:22.113 "hdgst": false, 00:09:22.113 "ddgst": false 00:09:22.113 }, 00:09:22.113 "method": "bdev_nvme_attach_controller" 00:09:22.113 }' 00:09:22.113 [2024-07-24 01:47:36.986298] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:09:22.113 [2024-07-24 01:47:36.986298] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:09:22.113 [2024-07-24 01:47:36.986298] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:09:22.113 [2024-07-24 01:47:36.986396] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-24 01:47:36.986396] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-24 01:47:36.986397] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:22.113 --proc-type=auto ] 00:09:22.113 --proc-type=auto ] 00:09:22.113 [2024-07-24 01:47:36.986431] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:09:22.113 [2024-07-24 01:47:36.986498] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:22.371 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.371 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.371 [2024-07-24 01:47:37.158474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.371 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.371 [2024-07-24 01:47:37.233409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:22.371 [2024-07-24 01:47:37.257697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.629 EAL: No free 2048 kB hugepages reported on node 1 00:09:22.629 [2024-07-24 01:47:37.333822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:22.629 [2024-07-24 01:47:37.359388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.629 [2024-07-24 01:47:37.434146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:22.629 [2024-07-24 01:47:37.459738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.887 [2024-07-24 01:47:37.537645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:22.887 Running I/O for 1 seconds... 00:09:22.887 Running I/O for 1 seconds... 00:09:22.887 Running I/O for 1 seconds... 00:09:23.145 Running I/O for 1 seconds... 00:09:24.079 00:09:24.079 Latency(us) 00:09:24.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.079 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:24.079 Nvme1n1 : 1.00 163657.95 639.29 0.00 0.00 779.10 335.27 1049.79 00:09:24.079 =================================================================================================================== 00:09:24.079 Total : 163657.95 639.29 0.00 0.00 779.10 335.27 1049.79 00:09:24.079 00:09:24.079 Latency(us) 00:09:24.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.079 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:24.079 Nvme1n1 : 1.01 10522.11 41.10 0.00 0.00 12112.91 6893.42 23010.42 00:09:24.079 =================================================================================================================== 00:09:24.079 Total : 10522.11 41.10 0.00 0.00 12112.91 6893.42 23010.42 00:09:24.079 00:09:24.079 Latency(us) 00:09:24.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.079 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:24.079 Nvme1n1 : 1.02 6446.64 25.18 0.00 0.00 19626.55 10437.21 33593.27 00:09:24.079 =================================================================================================================== 00:09:24.079 Total : 6446.64 25.18 0.00 0.00 19626.55 10437.21 33593.27 00:09:24.079 00:09:24.079 Latency(us) 00:09:24.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.079 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:24.079 Nvme1n1 : 1.01 6828.26 26.67 0.00 0.00 18688.17 5412.79 42525.58 00:09:24.079 =================================================================================================================== 00:09:24.079 Total : 6828.26 26.67 0.00 0.00 18688.17 5412.79 42525.58 00:09:24.337 01:47:39 
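[editor's note] As a quick sanity check on the result tables above, the MiB/s column is simply IOPS multiplied by the 4 KiB IO size: for the unmap job, 10522.11 * 4096 / 2^20 = 41.10 MiB/s, and for the flush job, 163657.95 * 4096 / 2^20 = 639.29 MiB/s, both matching the reported values. The flush workload runs orders of magnitude faster than the others, presumably because a flush against the RAM-backed Malloc0 bdev completes without moving any data.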
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1335323 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1335325 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1335327 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:24.337 rmmod nvme_tcp 00:09:24.337 rmmod nvme_fabrics 00:09:24.337 rmmod nvme_keyring 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1335264 ']' 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1335264 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1335264 ']' 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1335264 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1335264 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1335264' 00:09:24.337 killing process with pid 1335264 00:09:24.337 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1335264 00:09:24.337 01:47:39 
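[editor's note] The teardown traced here follows the usual order: delete the subsystem while the target is still responsive, unload the host-side NVMe modules, kill the target process, and finally clean up the namespace addressing. A condensed sketch under those assumptions (module-unload retries and error handling omitted; the netns deletion is a hypothetical step, since the trace only shows _remove_spdk_ns with xtrace disabled and the final address flush):

# Order of operations from the trace above
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp                 # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
kill "$nvmfpid" && wait "$nvmfpid"      # killprocess + wait on the nvmf_tgt PID
ip netns delete cvl_0_0_ns_spdk         # hypothetical; performed inside _remove_spdk_ns in the trace
ip -4 addr flush cvl_0_1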
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1335264 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.596 01:47:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:27.129 00:09:27.129 real 0m7.147s 00:09:27.129 user 0m16.133s 00:09:27.129 sys 0m3.650s 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:27.129 ************************************ 00:09:27.129 END TEST nvmf_bdev_io_wait 00:09:27.129 ************************************ 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:27.129 ************************************ 00:09:27.129 START TEST nvmf_queue_depth 00:09:27.129 ************************************ 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:27.129 * Looking for test storage... 
00:09:27.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:27.129 01:47:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:27.129 01:47:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.028 01:47:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:29.028 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:29.028 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:29.028 01:47:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:29.028 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:29.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.028 
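The scan traced above groups the host's NICs by PCI vendor/device ID into e810/x722/mlx buckets before the test picks its interfaces. A minimal, hand-runnable sketch of that classification follows; it uses lspci from pciutils rather than the harness's own pci_bus_cache, and only lists the Intel IDs visible in this trace (0x1592/0x159b for E810, 0x37d2 for X722), so it is an approximation of the logic rather than the test's actual code path.

```bash
#!/usr/bin/env bash
# Approximate the e810/x722 bucketing seen in the trace using lspci.
intel=0x8086
e810_ids=(0x1592 0x159b)   # Intel E810 ports (ice driver), as matched above
x722_ids=(0x37d2)          # Intel X722 ports (i40e driver)

declare -a e810 x722
while read -r addr vendor device; do
    [[ $vendor == "$intel" ]] || continue
    for id in "${e810_ids[@]}"; do [[ $device == "$id" ]] && e810+=("$addr"); done
    for id in "${x722_ids[@]}"; do [[ $device == "$id" ]] && x722+=("$addr"); done
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')

printf 'e810 ports: %s\n' "${e810[*]:-none}"
printf 'x722 ports: %s\n' "${x722[*]:-none}"
```

On this machine the two 0x159b ports (0000:0a:00.0 and 0000:0a:00.1, driver ice, net devices cvl_0_0 and cvl_0_1) land in the e810 bucket, which is why pci_devs resolves to exactly two entries in the trace above.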
01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:29.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:09:29.028 00:09:29.028 --- 10.0.0.2 ping statistics --- 00:09:29.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.028 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:29.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:09:29.028 00:09:29.028 --- 10.0.0.1 ping statistics --- 00:09:29.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.028 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1337561 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1337561 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1337561 ']' 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.028 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.028 [2024-07-24 01:47:43.705724] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
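The nvmf_tcp_init sequence traced above builds a loopback topology out of the two E810 ports: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A condensed sketch of the same plumbing, with the interface names, addresses, and port taken from this run:

```bash
# Loopback topology as set up by nvmf_tcp_init above.
TARGET_IF=cvl_0_0          # target port, moved into the namespace
INITIATOR_IF=cvl_0_1       # initiator port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic (port 4420) in from the initiator port
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target, as in the log
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

The two pings in the log (0.242 ms and 0.133 ms) are exactly these reachability checks; only after both succeed does common.sh return 0 and let nvmfappstart launch the target inside the namespace.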
00:09:29.028 [2024-07-24 01:47:43.705824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.028 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.028 [2024-07-24 01:47:43.771107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.028 [2024-07-24 01:47:43.857917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.028 [2024-07-24 01:47:43.857969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.028 [2024-07-24 01:47:43.857999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.028 [2024-07-24 01:47:43.858010] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.028 [2024-07-24 01:47:43.858020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.028 [2024-07-24 01:47:43.858047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 [2024-07-24 01:47:43.990134] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.286 01:47:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 Malloc0 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 [2024-07-24 01:47:44.051637] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1337584 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1337584 /var/tmp/bdevperf.sock 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1337584 ']' 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:29.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:29.286 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.286 [2024-07-24 01:47:44.096791] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
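Stripped of the xtrace noise, the queue-depth test above amounts to one target-side provisioning pass followed by a bdevperf run against the new listener. A condensed sketch of that flow follows; $SPDK_ROOT stands in for the Jenkins workspace checkout, the sleep calls stand in for the harness's waitforlisten, and all flags are the ones visible in the log.

```bash
# Condensed queue_depth.sh flow, as traced above.
SPDK_ROOT=/path/to/spdk            # stand-in for the Jenkins workspace checkout
rpc=$SPDK_ROOT/scripts/rpc.py
NS_CMD=(ip netns exec cvl_0_0_ns_spdk)

# target application on core 1 (-m 0x2), inside the target namespace
"${NS_CMD[@]}" "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2                            # stand-in for waitforlisten on /var/tmp/spdk.sock

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf as the initiator: queue depth 1024, 4 KiB verify workload, 10 seconds
"$SPDK_ROOT/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 1024 -o 4096 -w verify -t 10 &
sleep 2                            # stand-in for waitforlisten on bdevperf.sock
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
```

The run below reports about 8.5k IOPS (~33 MiB/s) against the Malloc0-backed namespace at queue depth 1024; both processes are then killed and the trap handler tears the namespace setup back down.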
00:09:29.287 [2024-07-24 01:47:44.096869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337584 ] 00:09:29.287 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.287 [2024-07-24 01:47:44.157761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.544 [2024-07-24 01:47:44.249474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:29.544 NVMe0n1 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:29.544 01:47:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:29.802 Running I/O for 10 seconds... 00:09:42.000 00:09:42.000 Latency(us) 00:09:42.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.000 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:42.000 Verification LBA range: start 0x0 length 0x4000 00:09:42.000 NVMe0n1 : 10.10 8490.27 33.17 0.00 0.00 120054.29 24660.95 74177.04 00:09:42.000 =================================================================================================================== 00:09:42.000 Total : 8490.27 33.17 0.00 0.00 120054.29 24660.95 74177.04 00:09:42.000 0 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1337584 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1337584 ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1337584 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1337584 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1337584' 00:09:42.000 killing process with pid 1337584 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1337584 00:09:42.000 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:42.000 00:09:42.000 Latency(us) 00:09:42.000 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.000 =================================================================================================================== 00:09:42.000 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1337584 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.000 rmmod nvme_tcp 00:09:42.000 rmmod nvme_fabrics 00:09:42.000 rmmod nvme_keyring 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1337561 ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1337561 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1337561 ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1337561 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.000 01:47:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1337561 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1337561' 00:09:42.000 killing process with pid 1337561 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1337561 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1337561 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.000 01:47:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:42.568 00:09:42.568 real 0m15.767s 00:09:42.568 user 0m22.330s 00:09:42.568 sys 0m2.907s 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 ************************************ 00:09:42.568 END TEST nvmf_queue_depth 00:09:42.568 ************************************ 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.568 ************************************ 00:09:42.568 START TEST nvmf_target_multipath 00:09:42.568 ************************************ 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:42.568 * Looking for test storage... 
00:09:42.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:42.568 01:47:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:44.511 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:44.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:44.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:44.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.512 01:47:59 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:44.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.512 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:44.771 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:09:44.771 00:09:44.771 --- 10.0.0.2 ping statistics --- 00:09:44.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.771 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:09:44.771 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:09:44.772 00:09:44.772 --- 10.0.0.1 ping statistics --- 00:09:44.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.772 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:44.772 only one NIC for nvmf test 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.772 rmmod nvme_tcp 00:09:44.772 rmmod nvme_fabrics 00:09:44.772 rmmod nvme_keyring 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.772 01:47:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.302 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.302 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:47.302 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:47.302 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.302 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:47.302 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.303 00:09:47.303 real 0m4.271s 
00:09:47.303 user 0m0.796s 00:09:47.303 sys 0m1.456s 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.303 ************************************ 00:09:47.303 END TEST nvmf_target_multipath 00:09:47.303 ************************************ 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.303 ************************************ 00:09:47.303 START TEST nvmf_zcopy 00:09:47.303 ************************************ 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.303 * Looking for test storage... 00:09:47.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.303 01:48:01 
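The common.sh preamble just traced for the zcopy test (as for every test in this log) exports the initiator-side defaults: NVMF_PORT=4420, a freshly generated NVME_HOSTNQN/NVME_HOSTID pair, and NVME_CONNECT='nvme connect'. zcopy.sh itself drives I/O through bdevperf, but for reference this is the shape of the nvme-cli call those variables are meant to feed, against a listener like the 10.0.0.2:4420 one created earlier in this log (that particular target has already been torn down by this point). Treat it as a hypothetical usage example, not a command taken from the script.

```bash
# Hypothetical initiator-side use of the common.sh variables set above.
NVMF_PORT=4420
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:}           # the uuid suffix, matching NVME_HOSTID in the trace
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
NVME_CONNECT="nvme connect"

$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s "$NVMF_PORT" \
    -n nqn.2016-06.io.spdk:cnode1
nvme list-subsys                          # the new controller should be listed here
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
```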
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.303 01:48:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.303 01:48:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.199 01:48:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.199 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:49.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:49.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:49.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:49.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.200 01:48:03 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:09:49.200 00:09:49.200 --- 10.0.0.2 ping statistics --- 00:09:49.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.200 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:09:49.200 00:09:49.200 --- 10.0.0.1 ping statistics --- 00:09:49.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.200 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1342879 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1342879 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1342879 ']' 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.200 01:48:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.200 [2024-07-24 01:48:03.876125] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:09:49.200 [2024-07-24 01:48:03.876197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.200 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.200 [2024-07-24 01:48:03.942944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.200 [2024-07-24 01:48:04.032081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.200 [2024-07-24 01:48:04.032144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.200 [2024-07-24 01:48:04.032161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.200 [2024-07-24 01:48:04.032175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.201 [2024-07-24 01:48:04.032187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.201 [2024-07-24 01:48:04.032217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 [2024-07-24 01:48:04.176304] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 [2024-07-24 01:48:04.192538] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 malloc0 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:49.459 { 00:09:49.459 "params": { 00:09:49.459 "name": "Nvme$subsystem", 00:09:49.459 "trtype": "$TEST_TRANSPORT", 00:09:49.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.459 "adrfam": "ipv4", 00:09:49.459 "trsvcid": "$NVMF_PORT", 00:09:49.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.459 "hdgst": ${hdgst:-false}, 00:09:49.459 "ddgst": ${ddgst:-false} 00:09:49.459 }, 00:09:49.459 "method": "bdev_nvme_attach_controller" 00:09:49.459 } 00:09:49.459 EOF 00:09:49.459 )") 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:49.459 01:48:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:49.459 "params": { 00:09:49.459 "name": "Nvme1", 00:09:49.459 "trtype": "tcp", 00:09:49.459 "traddr": "10.0.0.2", 00:09:49.459 "adrfam": "ipv4", 00:09:49.459 "trsvcid": "4420", 00:09:49.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:49.459 "hdgst": false, 00:09:49.459 "ddgst": false 00:09:49.459 }, 00:09:49.459 "method": "bdev_nvme_attach_controller" 00:09:49.459 }' 00:09:49.459 [2024-07-24 01:48:04.284311] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:09:49.459 [2024-07-24 01:48:04.284422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1342903 ] 00:09:49.459 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.459 [2024-07-24 01:48:04.350819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.717 [2024-07-24 01:48:04.449089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.975 Running I/O for 10 seconds... 00:09:59.940 00:09:59.941 Latency(us) 00:09:59.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.941 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:59.941 Verification LBA range: start 0x0 length 0x1000 00:09:59.941 Nvme1n1 : 10.01 5960.74 46.57 0.00 0.00 21415.77 403.53 30486.38 00:09:59.941 =================================================================================================================== 00:09:59.941 Total : 5960.74 46.57 0.00 0.00 21415.77 403.53 30486.38 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1344609 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:00.199 { 00:10:00.199 "params": { 00:10:00.199 "name": "Nvme$subsystem", 00:10:00.199 "trtype": "$TEST_TRANSPORT", 00:10:00.199 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:00.199 "adrfam": "ipv4", 00:10:00.199 "trsvcid": "$NVMF_PORT", 00:10:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:00.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:00.199 "hdgst": ${hdgst:-false}, 00:10:00.199 "ddgst": ${ddgst:-false} 00:10:00.199 }, 00:10:00.199 "method": "bdev_nvme_attach_controller" 00:10:00.199 } 00:10:00.199 EOF 00:10:00.199 )") 00:10:00.199 01:48:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:00.199 [2024-07-24 01:48:14.934719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.199 [2024-07-24 01:48:14.934763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:00.199 01:48:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:00.199 "params": { 00:10:00.199 "name": "Nvme1", 00:10:00.199 "trtype": "tcp", 00:10:00.199 "traddr": "10.0.0.2", 00:10:00.199 "adrfam": "ipv4", 00:10:00.199 "trsvcid": "4420", 00:10:00.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:00.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:00.199 "hdgst": false, 00:10:00.199 "ddgst": false 00:10:00.199 }, 00:10:00.199 "method": "bdev_nvme_attach_controller" 00:10:00.199 }' 00:10:00.199 [2024-07-24 01:48:14.942679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.199 [2024-07-24 01:48:14.942707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.199 [2024-07-24 01:48:14.950679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.199 [2024-07-24 01:48:14.950702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.199 [2024-07-24 01:48:14.958718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.199 [2024-07-24 01:48:14.958739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.199 [2024-07-24 01:48:14.966745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.199 [2024-07-24 01:48:14.966766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.199 [2024-07-24 01:48:14.971506] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:10:00.200 [2024-07-24 01:48:14.971568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1344609 ] 00:10:00.200 [2024-07-24 01:48:14.974756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:14.974775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:14.982778] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:14.982798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:14.990799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:14.990818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:14.998822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:14.998841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.200 [2024-07-24 01:48:15.006862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.006887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.014884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.014908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.022907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.022931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.030929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.030954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.032940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.200 [2024-07-24 01:48:15.038966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.038995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.046999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.047037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.055015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.055047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.063021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.063049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.071042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 
01:48:15.071069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.079066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.079101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.200 [2024-07-24 01:48:15.087109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.200 [2024-07-24 01:48:15.087144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.095133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.095165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.103139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.103164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.111159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.111185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.119179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.119204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.127201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.127229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.127892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.458 [2024-07-24 01:48:15.135224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.135249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.143258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.143289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.151285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.151332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.159309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.159368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.167350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.167409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.175376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.175411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.183395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.183430] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.191411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.458 [2024-07-24 01:48:15.191445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.458 [2024-07-24 01:48:15.199408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.199431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.207463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.207496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.215475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.215516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.223465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.223488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.231484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.231504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.239512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.239533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.247531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.247557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.255570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.255612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.263578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.263617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.271622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.271650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.279648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.279673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.287671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.287696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.295696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.295720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.303713] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.303738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.311817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.311845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.319823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.319851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.327847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.327875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.335866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.335892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.343897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.343928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.459 [2024-07-24 01:48:15.351915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.459 [2024-07-24 01:48:15.351942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 Running I/O for 5 seconds... 00:10:00.745 [2024-07-24 01:48:15.359936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.359964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.375058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.375086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.386362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.386398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.397544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.397573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.409003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.409035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.420075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.420102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.431515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.431544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.442593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 
[2024-07-24 01:48:15.442635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.453543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.453571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.465331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.465359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.476806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.476833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.487842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.487869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.498997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.499028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.510242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.510269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.521491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.521522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.532831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.532862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.544043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.544070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.555278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.555331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.566310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.566345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.578053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.578083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.589635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.589666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.603045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.745 [2024-07-24 01:48:15.603085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.745 [2024-07-24 01:48:15.613791] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:00.745 [2024-07-24 01:48:15.613820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:00.745 [2024-07-24 01:48:15.625155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:00.745 [2024-07-24 01:48:15.625182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: "Unable to add namespace") repeats continuously from 2024-07-24 01:48:15.638463 through 2024-07-24 01:48:19.036119 (elapsed 00:10:00.745 to 00:10:04.368) ...]
00:10:04.368 [2024-07-24 01:48:19.046721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:04.368 [2024-07-24 01:48:19.046748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:04.368 [2024-07-24 01:48:19.057967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:04.368 [2024-07-24 01:48:19.057994]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.069084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.069110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.079940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.079967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.091225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.091256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.102366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.102393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.113809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.113836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.124164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.124190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.134751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.134777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.145544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.145570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.156626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.156653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.167256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.167282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.177971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.177999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.188586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.188627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.198813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.198840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.209743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.209773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.222471] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.222498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.232469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.232510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.243399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.243425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.368 [2024-07-24 01:48:19.254301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.368 [2024-07-24 01:48:19.254340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.265730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.265764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.278669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.278696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.290292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.290343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.299624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.299650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.311580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.311623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.322117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.322144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.332981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.333008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.344342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.344369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.355378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.355405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.367775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.367802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.377616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.377659] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.389157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.389183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.399701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.399727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.410644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.410670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.421689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.421716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.432723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.432751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.443613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.443647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.454662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.454693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.465841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.465868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.476775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.476801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.487581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.487608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.499975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.500001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.510223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.510253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.627 [2024-07-24 01:48:19.521240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.627 [2024-07-24 01:48:19.521271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.533806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.533833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.543827] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.543854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.555484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.555515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.566505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.566532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.578920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.578946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.588655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.588683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.599297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.599347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.609933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.609961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.620556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.620584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.631623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.631650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.644447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.644475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.654675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.654714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.665711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.665738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.678176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.678204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.687995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.688022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.698777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.698804] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.709674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.709701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.720528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.720555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.731141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.731168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.743678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.743705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.753421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.753448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.765048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.765078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.886 [2024-07-24 01:48:19.775837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.886 [2024-07-24 01:48:19.775863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.787215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.787242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.799851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.799878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.810390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.810418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.821243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.821269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.833849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.833875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.844108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.844134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.858646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.858675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.870982] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.871009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.880136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.880163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.891882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.891908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.902788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.902815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.914049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.914075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.926963] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.926990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.937693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.937720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.948615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.948641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.961479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.961506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.971628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.971656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.981989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.982017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:19.992556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:19.992583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:20.005642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:20.005670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:20.015695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:20.015729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:20.026311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:20.026359] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.145 [2024-07-24 01:48:20.038791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.145 [2024-07-24 01:48:20.038819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.049254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.049282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.059951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.059979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.073126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.073153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.083410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.083438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.093780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.093808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.104209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.104236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.114763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.114790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.125453] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.125480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.138051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.138078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.148373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.148401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.159025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.159052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.171688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.171715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.181773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.181800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.192366] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.192394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.203069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.203097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.213639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.213666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.227136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.227163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.236943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.236970] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.247364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.247390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.257903] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.257930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.268090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.404 [2024-07-24 01:48:20.268117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.404 [2024-07-24 01:48:20.278975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.405 [2024-07-24 01:48:20.279002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.405 [2024-07-24 01:48:20.290105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.405 [2024-07-24 01:48:20.290132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.301150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.301176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.313692] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.313718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.324324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.324351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.335288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.335338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.348030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.348056] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.358703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.358729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.369517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.369544] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 [2024-07-24 01:48:20.377716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.663 [2024-07-24 01:48:20.377741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.663 00:10:05.664 Latency(us) 00:10:05.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.664 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:05.664 Nvme1n1 : 5.01 11505.16 89.88 0.00 0.00 11110.08 4805.97 23204.60 00:10:05.664 =================================================================================================================== 00:10:05.664 Total : 11505.16 89.88 0.00 0.00 11110.08 4805.97 23204.60 00:10:05.664 [2024-07-24 01:48:20.385497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.385521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.393531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.393555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.401581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.401624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.409618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.409664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.417626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.417672] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.425647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.425692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.433658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.433719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.441691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.441737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.449709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.449756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.457741] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.457789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.465755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.465801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.473779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.473830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.481805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.481853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.489818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.489863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.497837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.497883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.505856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.505900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.513883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.513929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.521888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.521922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.529898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.529925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.537960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.538006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.545976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.546020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.664 [2024-07-24 01:48:20.553988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.664 [2024-07-24 01:48:20.554031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.562009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.562042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.570049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.570096] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.578067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.578114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.586075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.586123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.594072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.594097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.602092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.602116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 [2024-07-24 01:48:20.610114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.923 [2024-07-24 01:48:20.610140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1344609) - No such process 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1344609 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.923 delay0 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:05.923 01:48:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:05.923 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.923 [2024-07-24 01:48:20.696405] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:14.064 Initializing NVMe Controllers 00:10:14.064 
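The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appears to be the zcopy test repeatedly calling nvmf_subsystem_add_ns for NSID 1 while that namespace still exists, so every attempt is expected to fail. Once the background I/O job is gone ("No such process"), zcopy.sh removes the namespace, re-creates it behind a delay bdev, and drives it with the bundled abort example. A rough hand-run sketch of those steps, assuming the SPDK repo is the working directory and that rpc_cmd is the suite's wrapper around scripts/rpc.py (all values below are copied from the log, not verified elsewhere):

    # drop the namespace the I/O job was using
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # wrap malloc0 in a delay bdev that adds ~1 s (1000000 us) of latency to every op
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # expose the slow bdev as NSID 1 again
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # exercise it with the abort example for 5 s, queue depth 64, 50/50 randrw (flags as in the log)
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
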
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:14.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:14.064 Initialization complete. Launching workers. 00:10:14.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 18851 00:10:14.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18966, failed to submit 125 00:10:14.064 success 18904, unsuccess 62, failed 0 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:14.064 rmmod nvme_tcp 00:10:14.064 rmmod nvme_fabrics 00:10:14.064 rmmod nvme_keyring 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1342879 ']' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1342879 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1342879 ']' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1342879 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1342879 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1342879' 00:10:14.064 killing process with pid 1342879 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1342879 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1342879 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.064 01:48:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:15.000 00:10:15.000 real 0m28.113s 00:10:15.000 user 0m40.871s 00:10:15.000 sys 0m9.182s 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.000 ************************************ 00:10:15.000 END TEST nvmf_zcopy 00:10:15.000 ************************************ 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.000 ************************************ 00:10:15.000 START TEST nvmf_nmic 00:10:15.000 ************************************ 00:10:15.000 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:15.258 * Looking for test storage... 
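The zcopy run then ends with the usual nvmftestfini teardown seen above: the kernel NVMe/TCP modules are unloaded, the nvmf target process (pid 1342879 in this log) is killed, and the test address is flushed from the link-partner interface before run_test moves on to nvmf_nmic. A minimal sketch of the same cleanup done by hand, assuming the interface name cvl_0_1 and the pid from this log:

    sync
    modprobe -v -r nvme-tcp     # also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines above show
    modprobe -v -r nvme-fabrics
    kill 1342879                # killprocess in the harness additionally checks the process name first
    ip -4 addr flush cvl_0_1    # remove the test IPs from the peer port
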
00:10:15.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.258 01:48:29 
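Before the nmic test proper starts, common.sh (sourced above) builds the host identity on the fly: nvme gen-hostnqn returns a uuid-based NQN, the uuid portion doubles as the host ID, and both are later handed to nvme connect via --hostnqn/--hostid. A hedged manual equivalent is sketched below; the target address and subsystem NQN are carried over from the earlier zcopy output purely for illustration:

    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    HOSTID=${HOSTNQN##*uuid:}          # keep only the uuid part, as common.sh does above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"
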
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:15.258 01:48:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.155 01:48:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:17.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.155 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:17.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.156 01:48:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:17.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:17.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.156 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:10:17.412 00:10:17.412 --- 10.0.0.2 ping statistics --- 00:10:17.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.412 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:10:17.412 00:10:17.412 --- 10.0.0.1 ping statistics --- 00:10:17.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.412 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1348124 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.412 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1348124 00:10:17.413 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1348124 ']' 00:10:17.413 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.413 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.413 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.413 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.413 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.413 [2024-07-24 01:48:32.219531] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:10:17.413 [2024-07-24 01:48:32.219630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.413 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.413 [2024-07-24 01:48:32.289825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.670 [2024-07-24 01:48:32.385768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.670 [2024-07-24 01:48:32.385828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.670 [2024-07-24 01:48:32.385845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.670 [2024-07-24 01:48:32.385858] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.670 [2024-07-24 01:48:32.385869] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
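The nvmftestinit steps traced above amount to a small back-to-back NVMe/TCP topology: one of the two ice ports is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and the target application is then launched inside that namespace. A condensed sketch of the same sequence, using the interface names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2); only the nvmf_tgt path is shortened:

# target NIC is isolated in its own namespace; the initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # start the target on cores 0-3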
00:10:17.670 [2024-07-24 01:48:32.385967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.670 [2024-07-24 01:48:32.386334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.670 [2024-07-24 01:48:32.386373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.670 [2024-07-24 01:48:32.386379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.670 [2024-07-24 01:48:32.547957] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.670 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.927 Malloc0 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 [2024-07-24 01:48:32.601893] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:17.928 test case1: single bdev can't be used in multiple subsystems 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 [2024-07-24 01:48:32.625724] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:17.928 [2024-07-24 01:48:32.625752] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:17.928 [2024-07-24 01:48:32.625782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:17.928 request: 00:10:17.928 { 00:10:17.928 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:17.928 "namespace": { 00:10:17.928 "bdev_name": "Malloc0", 00:10:17.928 "no_auto_visible": false 00:10:17.928 }, 00:10:17.928 "method": "nvmf_subsystem_add_ns", 00:10:17.928 "req_id": 1 00:10:17.928 } 00:10:17.928 Got JSON-RPC error response 00:10:17.928 response: 00:10:17.928 { 00:10:17.928 "code": -32602, 00:10:17.928 "message": "Invalid parameters" 00:10:17.928 } 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:17.928 Adding namespace failed - expected result. 
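rpc_cmd in the trace is autotest's wrapper around scripts/rpc.py, so test case1 can be reproduced by hand against the target's default /var/tmp/spdk.sock; a rough sketch (the rpc.py path is abbreviated):

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# expected to fail: Malloc0 is already claimed exclusive_write by cnode1, which is
# exactly the bdev_open / spdk_nvmf_subsystem_add_ns_ext error logged above
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'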
00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:17.928 test case2: host connect to nvmf target in multiple paths 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.928 [2024-07-24 01:48:32.637854] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.928 01:48:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.493 01:48:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:19.057 01:48:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:19.057 01:48:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:10:19.057 01:48:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.057 01:48:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:19.057 01:48:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:10:21.576 01:48:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:21.576 [global] 00:10:21.576 thread=1 00:10:21.576 invalidate=1 00:10:21.576 rw=write 00:10:21.576 time_based=1 00:10:21.576 runtime=1 00:10:21.576 ioengine=libaio 00:10:21.576 direct=1 00:10:21.576 bs=4096 00:10:21.576 iodepth=1 00:10:21.576 norandommap=0 00:10:21.576 numjobs=1 00:10:21.576 00:10:21.576 verify_dump=1 00:10:21.576 verify_backlog=512 00:10:21.576 verify_state_save=0 00:10:21.576 do_verify=1 00:10:21.576 verify=crc32c-intel 00:10:21.576 [job0] 00:10:21.576 filename=/dev/nvme0n1 00:10:21.576 Could not set queue depth (nvme0n1) 00:10:21.576 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:21.576 fio-3.35 00:10:21.576 Starting 1 thread 00:10:22.508 00:10:22.508 job0: (groupid=0, jobs=1): err= 0: pid=1348638: Wed Jul 24 01:48:37 2024 00:10:22.508 read: IOPS=645, BW=2584KiB/s (2646kB/s)(2664KiB/1031msec) 00:10:22.508 slat (nsec): min=7019, max=74910, avg=15926.36, stdev=5228.77 00:10:22.508 clat (usec): min=214, max=42025, avg=1124.11, stdev=5870.73 00:10:22.508 lat (usec): min=223, max=42040, avg=1140.04, stdev=5871.43 00:10:22.508 clat percentiles (usec): 00:10:22.508 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 251], 00:10:22.508 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:10:22.508 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:10:22.508 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:22.508 | 99.99th=[42206] 00:10:22.508 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:10:22.508 slat (usec): min=10, max=29241, avg=48.15, stdev=913.20 00:10:22.508 clat (usec): min=153, max=396, avg=207.52, stdev=43.77 00:10:22.508 lat (usec): min=164, max=29481, avg=255.67, stdev=915.47 00:10:22.508 clat percentiles (usec): 00:10:22.508 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:10:22.508 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:10:22.508 | 70.00th=[ 217], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 306], 00:10:22.508 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 383], 99.95th=[ 396], 00:10:22.508 | 99.99th=[ 396] 00:10:22.508 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:22.508 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:22.508 lat (usec) : 250=60.06%, 500=38.99%, 750=0.12% 00:10:22.508 lat (msec) : 50=0.83% 00:10:22.508 cpu : usr=2.52%, sys=3.69%, ctx=1692, majf=0, minf=2 00:10:22.508 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.508 issued rwts: total=666,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.508 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.508 00:10:22.508 Run status group 0 (all jobs): 00:10:22.508 READ: bw=2584KiB/s (2646kB/s), 2584KiB/s-2584KiB/s (2646kB/s-2646kB/s), io=2664KiB (2728kB), run=1031-1031msec 00:10:22.508 WRITE: bw=3973KiB/s (4068kB/s), 3973KiB/s-3973KiB/s (4068kB/s-4068kB/s), io=4096KiB (4194kB), run=1031-1031msec 00:10:22.508 00:10:22.508 Disk stats (read/write): 00:10:22.508 nvme0n1: ios=687/1024, merge=0/0, ticks=1530/195, in_queue=1725, util=98.80% 00:10:22.508 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:22.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:22.764 01:48:37 
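Test case2 above reduces to exposing a second listener, connecting to the same subsystem over both ports, and running a short verified write job before disconnecting. A simplified sketch of what the trace performs; the hostnqn/hostid values come from nvme gen-hostnqn earlier in the run, and the ready-wait loop is a stand-in for the waitforserial helper:

scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done   # wait for the namespace to show up
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v   # 4 KiB writes, iodepth 1, 1 s run, verify=crc32c-intel
nvme disconnect -n nqn.2016-06.io.spdk:cnode1               # drops both controllers/paths at once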
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:22.764 rmmod nvme_tcp 00:10:22.764 rmmod nvme_fabrics 00:10:22.764 rmmod nvme_keyring 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1348124 ']' 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1348124 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1348124 ']' 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1348124 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1348124 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1348124' 00:10:22.764 killing process with pid 1348124 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1348124 00:10:22.764 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1348124 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.021 01:48:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:25.553 00:10:25.553 real 0m10.054s 00:10:25.553 user 0m22.787s 00:10:25.553 sys 0m2.419s 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.553 ************************************ 00:10:25.553 END TEST nvmf_nmic 00:10:25.553 ************************************ 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.553 ************************************ 00:10:25.553 START TEST nvmf_fio_target 00:10:25.553 ************************************ 00:10:25.553 01:48:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:25.553 * Looking for test storage... 00:10:25.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:25.553 01:48:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:27.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:27.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.453 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:27.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:27.454 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:27.454 01:48:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:27.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:10:27.454 00:10:27.454 --- 10.0.0.2 ping statistics --- 00:10:27.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.454 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:27.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:10:27.454 00:10:27.454 --- 10.0.0.1 ping statistics --- 00:10:27.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.454 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1350717 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1350717 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1350717 ']' 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.454 01:48:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:27.454 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.454 [2024-07-24 01:48:42.224328] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:10:27.454 [2024-07-24 01:48:42.224440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.454 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.454 [2024-07-24 01:48:42.289999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.712 [2024-07-24 01:48:42.379139] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.712 [2024-07-24 01:48:42.379195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.712 [2024-07-24 01:48:42.379223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.712 [2024-07-24 01:48:42.379234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.712 [2024-07-24 01:48:42.379244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
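Editor's note: the nvmf_tcp_init trace above wires one NIC port (cvl_0_0) into a private network namespace as the target side and leaves the other port (cvl_0_1) in the default namespace as the initiator, then launches nvmf_tgt inside that namespace. The following is a minimal bash sketch reconstructing that sequence for readers of the log; interface names, addresses, the namespace name, and the nvmf_tgt flags are taken from the trace, while the relative binary path is an assumption standing in for the workspace build path shown above.

#!/usr/bin/env bash
# Sketch of the NVMe/TCP test topology set up by nvmf_tcp_init (reconstructed
# from the trace; names and addresses come from the log, paths are assumed).
set -e

TARGET_IF=cvl_0_0            # port moved into the target namespace
INITIATOR_IF=cvl_0_1         # port left in the default namespace
NS=cvl_0_0_ns_spdk

# start from a clean slate on both ports
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# create the namespace and move the target port into it
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# initiator side in the default namespace, target side inside the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# accept NVMe/TCP traffic on the default listener port
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# sanity-check reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# run the target inside the namespace with the flags used by nvmfappstart;
# the test then waits for the RPC socket /var/tmp/spdk.sock to come up
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target in its own namespace is what lets both ports of the same physical NIC act as target and initiator on a single host, which is why the trace flushes and reassigns addresses on both cvl_0_0 and cvl_0_1 before the ping checks.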
00:10:27.712 [2024-07-24 01:48:42.379412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.712 [2024-07-24 01:48:42.379441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.712 [2024-07-24 01:48:42.379489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.712 [2024-07-24 01:48:42.379491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.712 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.712 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:27.713 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.713 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:27.713 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.713 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.713 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.970 [2024-07-24 01:48:42.738262] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.970 01:48:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.227 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:28.227 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.791 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:28.791 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:28.791 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:28.791 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.049 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:29.049 01:48:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:29.306 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.564 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:29.564 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:29.821 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:30.078 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:30.336 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:30.336 01:48:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:30.336 01:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.626 01:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:30.626 01:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.888 01:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:30.888 01:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:31.147 01:48:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.404 [2024-07-24 01:48:46.203180] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.404 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:31.661 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:31.918 01:48:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:32.851 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:32.851 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:10:32.851 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:32.851 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:10:32.851 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:10:32.851 01:48:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:10:34.749 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:34.749 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:34.749 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:34.749 01:48:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:10:34.749 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:34.749 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:10:34.749 01:48:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:34.749 [global] 00:10:34.749 thread=1 00:10:34.749 invalidate=1 00:10:34.749 rw=write 00:10:34.749 time_based=1 00:10:34.749 runtime=1 00:10:34.749 ioengine=libaio 00:10:34.749 direct=1 00:10:34.749 bs=4096 00:10:34.749 iodepth=1 00:10:34.749 norandommap=0 00:10:34.749 numjobs=1 00:10:34.749 00:10:34.749 verify_dump=1 00:10:34.749 verify_backlog=512 00:10:34.749 verify_state_save=0 00:10:34.749 do_verify=1 00:10:34.749 verify=crc32c-intel 00:10:34.749 [job0] 00:10:34.749 filename=/dev/nvme0n1 00:10:34.749 [job1] 00:10:34.749 filename=/dev/nvme0n2 00:10:34.749 [job2] 00:10:34.749 filename=/dev/nvme0n3 00:10:34.749 [job3] 00:10:34.749 filename=/dev/nvme0n4 00:10:34.749 Could not set queue depth (nvme0n1) 00:10:34.749 Could not set queue depth (nvme0n2) 00:10:34.750 Could not set queue depth (nvme0n3) 00:10:34.750 Could not set queue depth (nvme0n4) 00:10:35.007 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.007 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.007 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.007 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:35.007 fio-3.35 00:10:35.007 Starting 4 threads 00:10:36.380 00:10:36.380 job0: (groupid=0, jobs=1): err= 0: pid=1351793: Wed Jul 24 01:48:50 2024 00:10:36.380 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:36.380 slat (nsec): min=5521, max=51732, avg=12684.82, stdev=5862.02 00:10:36.380 clat (usec): min=229, max=41043, avg=337.96, stdev=1100.07 00:10:36.380 lat (usec): min=236, max=41053, avg=350.64, stdev=1100.14 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 239], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 265], 00:10:36.380 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:10:36.380 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 359], 00:10:36.380 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[14353], 99.95th=[41157], 00:10:36.380 | 99.99th=[41157] 00:10:36.380 write: IOPS=1730, BW=6921KiB/s (7087kB/s)(6928KiB/1001msec); 0 zone resets 00:10:36.380 slat (nsec): min=7800, max=74042, avg=17192.66, stdev=8771.03 00:10:36.380 clat (usec): min=169, max=529, avg=241.10, stdev=66.99 00:10:36.380 lat (usec): min=179, max=566, avg=258.29, stdev=71.00 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:10:36.380 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 219], 60.00th=[ 225], 00:10:36.380 | 70.00th=[ 241], 80.00th=[ 281], 90.00th=[ 351], 95.00th=[ 392], 00:10:36.380 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 523], 99.95th=[ 529], 00:10:36.380 | 99.99th=[ 529] 00:10:36.380 bw ( KiB/s): min= 8192, max= 8192, per=65.18%, avg=8192.00, stdev= 0.00, samples=1 00:10:36.380 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, 
samples=1 00:10:36.380 lat (usec) : 250=44.40%, 500=54.93%, 750=0.58% 00:10:36.380 lat (msec) : 2=0.03%, 20=0.03%, 50=0.03% 00:10:36.380 cpu : usr=3.50%, sys=6.90%, ctx=3271, majf=0, minf=1 00:10:36.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 issued rwts: total=1536,1732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.380 job1: (groupid=0, jobs=1): err= 0: pid=1351794: Wed Jul 24 01:48:50 2024 00:10:36.380 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:10:36.380 slat (nsec): min=8235, max=35367, avg=19887.09, stdev=8951.92 00:10:36.380 clat (usec): min=625, max=41946, avg=39162.81, stdev=8610.93 00:10:36.380 lat (usec): min=635, max=41981, avg=39182.69, stdev=8613.21 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:36.380 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.380 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.380 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:36.380 | 99.99th=[42206] 00:10:36.380 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:36.380 slat (nsec): min=8312, max=61156, avg=19995.71, stdev=10256.39 00:10:36.380 clat (usec): min=156, max=523, avg=272.43, stdev=74.36 00:10:36.380 lat (usec): min=168, max=564, avg=292.42, stdev=76.31 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 196], 00:10:36.380 | 30.00th=[ 215], 40.00th=[ 241], 50.00th=[ 273], 60.00th=[ 302], 00:10:36.380 | 70.00th=[ 318], 80.00th=[ 334], 90.00th=[ 375], 95.00th=[ 400], 00:10:36.380 | 99.00th=[ 441], 99.50th=[ 461], 99.90th=[ 523], 99.95th=[ 523], 00:10:36.380 | 99.99th=[ 523] 00:10:36.380 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.380 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.380 lat (usec) : 250=42.13%, 500=53.56%, 750=0.37% 00:10:36.380 lat (msec) : 50=3.93% 00:10:36.380 cpu : usr=0.59%, sys=1.28%, ctx=536, majf=0, minf=1 00:10:36.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.380 job2: (groupid=0, jobs=1): err= 0: pid=1351795: Wed Jul 24 01:48:50 2024 00:10:36.380 read: IOPS=260, BW=1043KiB/s (1068kB/s)(1076KiB/1032msec) 00:10:36.380 slat (nsec): min=6056, max=37522, avg=9989.78, stdev=5640.92 00:10:36.380 clat (usec): min=248, max=41221, avg=3290.01, stdev=10460.74 00:10:36.380 lat (usec): min=256, max=41241, avg=3300.00, stdev=10464.78 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:10:36.380 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 363], 00:10:36.380 | 70.00th=[ 445], 80.00th=[ 478], 90.00th=[ 537], 95.00th=[41157], 00:10:36.380 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.380 | 99.99th=[41157] 
00:10:36.380 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:36.380 slat (nsec): min=8652, max=74791, avg=18167.86, stdev=10868.87 00:10:36.380 clat (usec): min=160, max=552, avg=255.88, stdev=78.14 00:10:36.380 lat (usec): min=170, max=566, avg=274.05, stdev=83.59 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 190], 00:10:36.380 | 30.00th=[ 198], 40.00th=[ 212], 50.00th=[ 225], 60.00th=[ 255], 00:10:36.380 | 70.00th=[ 297], 80.00th=[ 322], 90.00th=[ 371], 95.00th=[ 400], 00:10:36.380 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 553], 99.95th=[ 553], 00:10:36.380 | 99.99th=[ 553] 00:10:36.380 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.380 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.380 lat (usec) : 250=38.92%, 500=56.34%, 750=1.92% 00:10:36.380 lat (msec) : 2=0.26%, 20=0.13%, 50=2.43% 00:10:36.380 cpu : usr=0.19%, sys=1.94%, ctx=782, majf=0, minf=1 00:10:36.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.380 job3: (groupid=0, jobs=1): err= 0: pid=1351796: Wed Jul 24 01:48:50 2024 00:10:36.380 read: IOPS=20, BW=80.8KiB/s (82.7kB/s)(84.0KiB/1040msec) 00:10:36.380 slat (nsec): min=8373, max=37779, avg=22389.67, stdev=9953.75 00:10:36.380 clat (usec): min=40854, max=41060, avg=40968.44, stdev=59.31 00:10:36.380 lat (usec): min=40892, max=41075, avg=40990.83, stdev=56.55 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:36.380 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.380 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.380 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.380 | 99.99th=[41157] 00:10:36.380 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:36.380 slat (nsec): min=8624, max=66102, avg=18830.62, stdev=10887.22 00:10:36.380 clat (usec): min=162, max=598, avg=325.23, stdev=92.57 00:10:36.380 lat (usec): min=172, max=635, avg=344.06, stdev=97.20 00:10:36.380 clat percentiles (usec): 00:10:36.380 | 1.00th=[ 174], 5.00th=[ 198], 10.00th=[ 210], 20.00th=[ 227], 00:10:36.380 | 30.00th=[ 260], 40.00th=[ 297], 50.00th=[ 326], 60.00th=[ 347], 00:10:36.380 | 70.00th=[ 379], 80.00th=[ 404], 90.00th=[ 441], 95.00th=[ 494], 00:10:36.380 | 99.00th=[ 553], 99.50th=[ 578], 99.90th=[ 603], 99.95th=[ 603], 00:10:36.380 | 99.99th=[ 603] 00:10:36.380 bw ( KiB/s): min= 4096, max= 4096, per=32.59%, avg=4096.00, stdev= 0.00, samples=1 00:10:36.380 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:36.380 lat (usec) : 250=27.02%, 500=64.73%, 750=4.32% 00:10:36.380 lat (msec) : 50=3.94% 00:10:36.380 cpu : usr=0.58%, sys=1.25%, ctx=533, majf=0, minf=2 00:10:36.380 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.380 issued rwts: total=21,512,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:10:36.380 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.380 00:10:36.380 Run status group 0 (all jobs): 00:10:36.380 READ: bw=7108KiB/s (7278kB/s), 80.8KiB/s-6138KiB/s (82.7kB/s-6285kB/s), io=7392KiB (7569kB), run=1001-1040msec 00:10:36.380 WRITE: bw=12.3MiB/s (12.9MB/s), 1969KiB/s-6921KiB/s (2016kB/s-7087kB/s), io=12.8MiB (13.4MB), run=1001-1040msec 00:10:36.380 00:10:36.380 Disk stats (read/write): 00:10:36.380 nvme0n1: ios=1314/1536, merge=0/0, ticks=808/354, in_queue=1162, util=98.00% 00:10:36.380 nvme0n2: ios=68/512, merge=0/0, ticks=1383/134, in_queue=1517, util=98.07% 00:10:36.380 nvme0n3: ios=290/512, merge=0/0, ticks=1662/128, in_queue=1790, util=98.01% 00:10:36.380 nvme0n4: ios=42/512, merge=0/0, ticks=1051/158, in_queue=1209, util=94.95% 00:10:36.380 01:48:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:36.380 [global] 00:10:36.380 thread=1 00:10:36.380 invalidate=1 00:10:36.380 rw=randwrite 00:10:36.380 time_based=1 00:10:36.380 runtime=1 00:10:36.380 ioengine=libaio 00:10:36.380 direct=1 00:10:36.380 bs=4096 00:10:36.380 iodepth=1 00:10:36.380 norandommap=0 00:10:36.380 numjobs=1 00:10:36.380 00:10:36.380 verify_dump=1 00:10:36.380 verify_backlog=512 00:10:36.380 verify_state_save=0 00:10:36.380 do_verify=1 00:10:36.381 verify=crc32c-intel 00:10:36.381 [job0] 00:10:36.381 filename=/dev/nvme0n1 00:10:36.381 [job1] 00:10:36.381 filename=/dev/nvme0n2 00:10:36.381 [job2] 00:10:36.381 filename=/dev/nvme0n3 00:10:36.381 [job3] 00:10:36.381 filename=/dev/nvme0n4 00:10:36.381 Could not set queue depth (nvme0n1) 00:10:36.381 Could not set queue depth (nvme0n2) 00:10:36.381 Could not set queue depth (nvme0n3) 00:10:36.381 Could not set queue depth (nvme0n4) 00:10:36.381 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.381 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.381 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.381 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.381 fio-3.35 00:10:36.381 Starting 4 threads 00:10:37.753 00:10:37.753 job0: (groupid=0, jobs=1): err= 0: pid=1352020: Wed Jul 24 01:48:52 2024 00:10:37.753 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:10:37.753 slat (nsec): min=11187, max=34618, avg=25151.73, stdev=9914.27 00:10:37.753 clat (usec): min=40757, max=41049, avg=40963.27, stdev=56.06 00:10:37.753 lat (usec): min=40768, max=41078, avg=40988.42, stdev=57.10 00:10:37.753 clat percentiles (usec): 00:10:37.753 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:37.753 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:37.753 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:37.753 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:37.753 | 99.99th=[41157] 00:10:37.753 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:10:37.753 slat (nsec): min=11262, max=40847, avg=13649.56, stdev=3468.35 00:10:37.753 clat (usec): min=164, max=1202, avg=216.69, stdev=64.55 00:10:37.753 lat (usec): min=176, max=1214, avg=230.34, stdev=65.11 00:10:37.753 clat percentiles (usec): 00:10:37.753 | 
1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:10:37.753 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 217], 60.00th=[ 219], 00:10:37.753 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 235], 95.00th=[ 243], 00:10:37.753 | 99.00th=[ 277], 99.50th=[ 840], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:37.753 | 99.99th=[ 1205] 00:10:37.753 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.753 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.753 lat (usec) : 250=93.45%, 500=1.87%, 1000=0.37% 00:10:37.753 lat (msec) : 2=0.19%, 50=4.12% 00:10:37.753 cpu : usr=0.59%, sys=0.39%, ctx=534, majf=0, minf=1 00:10:37.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.753 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.753 job1: (groupid=0, jobs=1): err= 0: pid=1352021: Wed Jul 24 01:48:52 2024 00:10:37.753 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:10:37.753 slat (nsec): min=7558, max=37304, avg=26254.09, stdev=10183.33 00:10:37.753 clat (usec): min=40945, max=42022, avg=41711.95, stdev=427.32 00:10:37.753 lat (usec): min=40982, max=42040, avg=41738.21, stdev=430.44 00:10:37.753 clat percentiles (usec): 00:10:37.753 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:37.753 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:37.753 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:37.753 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:37.753 | 99.99th=[42206] 00:10:37.753 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:37.753 slat (nsec): min=6350, max=33733, avg=8463.27, stdev=2092.41 00:10:37.753 clat (usec): min=162, max=771, avg=217.81, stdev=37.07 00:10:37.753 lat (usec): min=169, max=781, avg=226.28, stdev=37.32 00:10:37.753 clat percentiles (usec): 00:10:37.753 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 196], 00:10:37.753 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 225], 00:10:37.753 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 245], 00:10:37.753 | 99.00th=[ 293], 99.50th=[ 330], 99.90th=[ 775], 99.95th=[ 775], 00:10:37.753 | 99.99th=[ 775] 00:10:37.753 bw ( KiB/s): min= 4096, max= 4096, per=23.02%, avg=4096.00, stdev= 0.00, samples=1 00:10:37.753 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:37.753 lat (usec) : 250=93.26%, 500=2.25%, 750=0.19%, 1000=0.19% 00:10:37.753 lat (msec) : 50=4.12% 00:10:37.753 cpu : usr=0.00%, sys=0.97%, ctx=534, majf=0, minf=1 00:10:37.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.753 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.753 job2: (groupid=0, jobs=1): err= 0: pid=1352022: Wed Jul 24 01:48:52 2024 00:10:37.753 read: IOPS=1255, BW=5022KiB/s (5143kB/s)(5188KiB/1033msec) 00:10:37.753 slat (nsec): min=4803, max=71235, avg=12487.78, stdev=7646.68 00:10:37.753 clat (usec): min=220, 
max=41325, avg=520.71, stdev=3190.54 00:10:37.753 lat (usec): min=227, max=41359, avg=533.19, stdev=3192.21 00:10:37.753 clat percentiles (usec): 00:10:37.753 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:10:37.753 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:10:37.753 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 326], 95.00th=[ 363], 00:10:37.753 | 99.00th=[ 408], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:37.753 | 99.99th=[41157] 00:10:37.753 write: IOPS=1486, BW=5948KiB/s (6090kB/s)(6144KiB/1033msec); 0 zone resets 00:10:37.753 slat (nsec): min=6130, max=45412, avg=13685.91, stdev=5063.47 00:10:37.753 clat (usec): min=152, max=434, avg=201.52, stdev=35.86 00:10:37.753 lat (usec): min=160, max=453, avg=215.20, stdev=34.11 00:10:37.753 clat percentiles (usec): 00:10:37.753 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:10:37.753 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:10:37.753 | 70.00th=[ 208], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 258], 00:10:37.753 | 99.00th=[ 383], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 437], 00:10:37.753 | 99.99th=[ 437] 00:10:37.753 bw ( KiB/s): min= 4096, max= 8192, per=34.53%, avg=6144.00, stdev=2896.31, samples=2 00:10:37.753 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:37.753 lat (usec) : 250=67.21%, 500=32.47%, 750=0.04% 00:10:37.753 lat (msec) : 50=0.28% 00:10:37.753 cpu : usr=1.55%, sys=4.07%, ctx=2833, majf=0, minf=2 00:10:37.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.753 issued rwts: total=1297,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.754 job3: (groupid=0, jobs=1): err= 0: pid=1352023: Wed Jul 24 01:48:52 2024 00:10:37.754 read: IOPS=1805, BW=7221KiB/s (7394kB/s)(7228KiB/1001msec) 00:10:37.754 slat (nsec): min=5508, max=53798, avg=14534.23, stdev=8500.19 00:10:37.754 clat (usec): min=216, max=544, avg=288.66, stdev=44.24 00:10:37.754 lat (usec): min=230, max=564, avg=303.19, stdev=45.91 00:10:37.754 clat percentiles (usec): 00:10:37.754 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:10:37.754 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 297], 00:10:37.754 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 359], 00:10:37.754 | 99.00th=[ 469], 99.50th=[ 478], 99.90th=[ 510], 99.95th=[ 545], 00:10:37.754 | 99.99th=[ 545] 00:10:37.754 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:37.754 slat (nsec): min=7133, max=44250, avg=14578.17, stdev=4894.22 00:10:37.754 clat (usec): min=149, max=570, avg=198.38, stdev=30.60 00:10:37.754 lat (usec): min=164, max=580, avg=212.96, stdev=30.54 00:10:37.754 clat percentiles (usec): 00:10:37.754 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:10:37.754 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 198], 00:10:37.754 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 249], 00:10:37.754 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 408], 99.95th=[ 412], 00:10:37.754 | 99.99th=[ 570] 00:10:37.754 bw ( KiB/s): min= 8192, max= 8192, per=46.04%, avg=8192.00, stdev= 0.00, samples=1 00:10:37.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:37.754 lat (usec) : 
250=61.35%, 500=38.55%, 750=0.10% 00:10:37.754 cpu : usr=4.40%, sys=5.80%, ctx=3856, majf=0, minf=1 00:10:37.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:37.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:37.754 issued rwts: total=1807,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:37.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:37.754 00:10:37.754 Run status group 0 (all jobs): 00:10:37.754 READ: bw=11.9MiB/s (12.4MB/s), 84.9KiB/s-7221KiB/s (87.0kB/s-7394kB/s), io=12.3MiB (12.9MB), run=1001-1036msec 00:10:37.754 WRITE: bw=17.4MiB/s (18.2MB/s), 1977KiB/s-8184KiB/s (2024kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1036msec 00:10:37.754 00:10:37.754 Disk stats (read/write): 00:10:37.754 nvme0n1: ios=67/512, merge=0/0, ticks=721/103, in_queue=824, util=86.87% 00:10:37.754 nvme0n2: ios=37/512, merge=0/0, ticks=730/112, in_queue=842, util=86.98% 00:10:37.754 nvme0n3: ios=1292/1536, merge=0/0, ticks=457/296, in_queue=753, util=89.02% 00:10:37.754 nvme0n4: ios=1549/1654, merge=0/0, ticks=759/317, in_queue=1076, util=91.25% 00:10:37.754 01:48:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:37.754 [global] 00:10:37.754 thread=1 00:10:37.754 invalidate=1 00:10:37.754 rw=write 00:10:37.754 time_based=1 00:10:37.754 runtime=1 00:10:37.754 ioengine=libaio 00:10:37.754 direct=1 00:10:37.754 bs=4096 00:10:37.754 iodepth=128 00:10:37.754 norandommap=0 00:10:37.754 numjobs=1 00:10:37.754 00:10:37.754 verify_dump=1 00:10:37.754 verify_backlog=512 00:10:37.754 verify_state_save=0 00:10:37.754 do_verify=1 00:10:37.754 verify=crc32c-intel 00:10:37.754 [job0] 00:10:37.754 filename=/dev/nvme0n1 00:10:37.754 [job1] 00:10:37.754 filename=/dev/nvme0n2 00:10:37.754 [job2] 00:10:37.754 filename=/dev/nvme0n3 00:10:37.754 [job3] 00:10:37.754 filename=/dev/nvme0n4 00:10:37.754 Could not set queue depth (nvme0n1) 00:10:37.754 Could not set queue depth (nvme0n2) 00:10:37.754 Could not set queue depth (nvme0n3) 00:10:37.754 Could not set queue depth (nvme0n4) 00:10:37.754 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.754 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.754 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.754 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:37.754 fio-3.35 00:10:37.754 Starting 4 threads 00:10:39.136 00:10:39.136 job0: (groupid=0, jobs=1): err= 0: pid=1352377: Wed Jul 24 01:48:53 2024 00:10:39.136 read: IOPS=4362, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:10:39.136 slat (usec): min=3, max=7631, avg=96.95, stdev=532.13 00:10:39.136 clat (usec): min=3179, max=23284, avg=12838.90, stdev=2012.53 00:10:39.136 lat (usec): min=3191, max=23299, avg=12935.85, stdev=2052.34 00:10:39.136 clat percentiles (usec): 00:10:39.136 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11600], 00:10:39.136 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:10:39.136 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15270], 95.00th=[15926], 00:10:39.136 | 99.00th=[18744], 99.50th=[18744], 99.90th=[21365], 
99.95th=[21627], 00:10:39.136 | 99.99th=[23200] 00:10:39.136 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:39.136 slat (usec): min=3, max=28865, avg=113.47, stdev=828.28 00:10:39.136 clat (usec): min=6417, max=74479, avg=15030.45, stdev=8871.45 00:10:39.136 lat (usec): min=6433, max=74504, avg=15143.92, stdev=8941.58 00:10:39.136 clat percentiles (usec): 00:10:39.136 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11207], 00:10:39.136 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:10:39.136 | 70.00th=[13173], 80.00th=[14877], 90.00th=[23200], 95.00th=[35914], 00:10:39.136 | 99.00th=[52167], 99.50th=[61604], 99.90th=[65799], 99.95th=[74974], 00:10:39.136 | 99.99th=[74974] 00:10:39.136 bw ( KiB/s): min=16416, max=20480, per=26.61%, avg=18448.00, stdev=2873.68, samples=2 00:10:39.136 iops : min= 4104, max= 5120, avg=4612.00, stdev=718.42, samples=2 00:10:39.136 lat (msec) : 4=0.22%, 10=7.83%, 20=85.21%, 50=6.00%, 100=0.73% 00:10:39.136 cpu : usr=6.18%, sys=10.57%, ctx=430, majf=0, minf=1 00:10:39.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:39.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.136 issued rwts: total=4380,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.136 job1: (groupid=0, jobs=1): err= 0: pid=1352378: Wed Jul 24 01:48:53 2024 00:10:39.136 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:10:39.136 slat (usec): min=2, max=13994, avg=113.55, stdev=841.00 00:10:39.136 clat (usec): min=4461, max=37334, avg=15176.22, stdev=4072.14 00:10:39.136 lat (usec): min=4472, max=37343, avg=15289.77, stdev=4132.24 00:10:39.136 clat percentiles (usec): 00:10:39.136 | 1.00th=[ 4883], 5.00th=[10290], 10.00th=[11338], 20.00th=[12125], 00:10:39.136 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14615], 60.00th=[15270], 00:10:39.137 | 70.00th=[16450], 80.00th=[17957], 90.00th=[20579], 95.00th=[22938], 00:10:39.137 | 99.00th=[27132], 99.50th=[29754], 99.90th=[36439], 99.95th=[36439], 00:10:39.137 | 99.99th=[37487] 00:10:39.137 write: IOPS=4223, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1011msec); 0 zone resets 00:10:39.137 slat (usec): min=3, max=12542, avg=113.01, stdev=831.18 00:10:39.137 clat (usec): min=712, max=47197, avg=15439.01, stdev=7577.61 00:10:39.137 lat (usec): min=721, max=47210, avg=15552.03, stdev=7654.32 00:10:39.137 clat percentiles (usec): 00:10:39.137 | 1.00th=[ 4817], 5.00th=[ 7242], 10.00th=[10028], 20.00th=[11469], 00:10:39.137 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:10:39.137 | 70.00th=[15533], 80.00th=[16909], 90.00th=[23200], 95.00th=[35390], 00:10:39.137 | 99.00th=[45351], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:10:39.137 | 99.99th=[47449] 00:10:39.137 bw ( KiB/s): min=16384, max=16760, per=23.90%, avg=16572.00, stdev=265.87, samples=2 00:10:39.137 iops : min= 4096, max= 4190, avg=4143.00, stdev=66.47, samples=2 00:10:39.137 lat (usec) : 750=0.02% 00:10:39.137 lat (msec) : 10=7.16%, 20=81.77%, 50=11.04% 00:10:39.137 cpu : usr=6.53%, sys=7.72%, ctx=269, majf=0, minf=1 00:10:39.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:39.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:10:39.137 issued rwts: total=4096,4270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.137 job2: (groupid=0, jobs=1): err= 0: pid=1352379: Wed Jul 24 01:48:53 2024 00:10:39.137 read: IOPS=4045, BW=15.8MiB/s (16.6MB/s)(16.5MiB/1044msec) 00:10:39.137 slat (usec): min=2, max=21334, avg=103.01, stdev=759.66 00:10:39.137 clat (usec): min=1058, max=60530, avg=16140.17, stdev=8319.47 00:10:39.137 lat (usec): min=1063, max=60535, avg=16243.18, stdev=8333.52 00:10:39.137 clat percentiles (usec): 00:10:39.137 | 1.00th=[ 2540], 5.00th=[ 8586], 10.00th=[10814], 20.00th=[12649], 00:10:39.137 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:10:39.137 | 70.00th=[15533], 80.00th=[17171], 90.00th=[23200], 95.00th=[27395], 00:10:39.137 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:10:39.137 | 99.99th=[60556] 00:10:39.137 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:10:39.137 slat (usec): min=3, max=24795, avg=92.39, stdev=684.33 00:10:39.137 clat (usec): min=581, max=30690, avg=13949.36, stdev=3745.65 00:10:39.137 lat (usec): min=746, max=43379, avg=14041.75, stdev=3799.78 00:10:39.137 clat percentiles (usec): 00:10:39.137 | 1.00th=[ 5604], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[11600], 00:10:39.137 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14353], 00:10:39.137 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15795], 95.00th=[19530], 00:10:39.137 | 99.00th=[30016], 99.50th=[30802], 99.90th=[30802], 99.95th=[30802], 00:10:39.137 | 99.99th=[30802] 00:10:39.137 bw ( KiB/s): min=18184, max=18680, per=26.59%, avg=18432.00, stdev=350.72, samples=2 00:10:39.137 iops : min= 4546, max= 4670, avg=4608.00, stdev=87.68, samples=2 00:10:39.137 lat (usec) : 750=0.03%, 1000=0.01% 00:10:39.137 lat (msec) : 2=0.07%, 4=1.17%, 10=8.57%, 20=81.73%, 50=7.09% 00:10:39.137 lat (msec) : 100=1.32% 00:10:39.137 cpu : usr=3.84%, sys=6.52%, ctx=386, majf=0, minf=1 00:10:39.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:39.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.137 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.137 job3: (groupid=0, jobs=1): err= 0: pid=1352380: Wed Jul 24 01:48:53 2024 00:10:39.137 read: IOPS=4160, BW=16.3MiB/s (17.0MB/s)(16.4MiB/1010msec) 00:10:39.137 slat (usec): min=2, max=17679, avg=127.24, stdev=915.76 00:10:39.137 clat (usec): min=3885, max=67117, avg=15614.94, stdev=7525.83 00:10:39.137 lat (usec): min=4681, max=67135, avg=15742.18, stdev=7591.62 00:10:39.137 clat percentiles (usec): 00:10:39.137 | 1.00th=[ 5932], 5.00th=[10552], 10.00th=[11076], 20.00th=[11731], 00:10:39.137 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:10:39.137 | 70.00th=[15533], 80.00th=[17957], 90.00th=[22676], 95.00th=[24249], 00:10:39.137 | 99.00th=[63177], 99.50th=[65274], 99.90th=[66847], 99.95th=[67634], 00:10:39.137 | 99.99th=[67634] 00:10:39.137 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:10:39.137 slat (usec): min=3, max=17890, avg=89.52, stdev=651.37 00:10:39.137 clat (usec): min=2570, max=67122, avg=13565.59, stdev=5320.39 00:10:39.137 lat (usec): min=2576, max=67142, avg=13655.11, stdev=5372.94 00:10:39.137 clat percentiles (usec): 
00:10:39.137 | 1.00th=[ 4359], 5.00th=[ 6718], 10.00th=[ 8586], 20.00th=[10814], 00:10:39.137 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13566], 60.00th=[14222], 00:10:39.137 | 70.00th=[14484], 80.00th=[15008], 90.00th=[17433], 95.00th=[19268], 00:10:39.137 | 99.00th=[43779], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:10:39.137 | 99.99th=[67634] 00:10:39.137 bw ( KiB/s): min=16224, max=20464, per=26.46%, avg=18344.00, stdev=2998.13, samples=2 00:10:39.137 iops : min= 4056, max= 5116, avg=4586.00, stdev=749.53, samples=2 00:10:39.137 lat (msec) : 4=0.48%, 10=9.36%, 20=80.12%, 50=8.83%, 100=1.20% 00:10:39.137 cpu : usr=5.85%, sys=7.33%, ctx=441, majf=0, minf=1 00:10:39.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:39.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:39.137 issued rwts: total=4202,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:39.137 00:10:39.137 Run status group 0 (all jobs): 00:10:39.137 READ: bw=63.2MiB/s (66.3MB/s), 15.8MiB/s-17.0MiB/s (16.6MB/s-17.9MB/s), io=66.0MiB (69.2MB), run=1004-1044msec 00:10:39.137 WRITE: bw=67.7MiB/s (71.0MB/s), 16.5MiB/s-17.9MiB/s (17.3MB/s-18.8MB/s), io=70.7MiB (74.1MB), run=1004-1044msec 00:10:39.137 00:10:39.137 Disk stats (read/write): 00:10:39.137 nvme0n1: ios=3634/3853, merge=0/0, ticks=15957/23010, in_queue=38967, util=86.97% 00:10:39.137 nvme0n2: ios=3237/3584, merge=0/0, ticks=45960/55435, in_queue=101395, util=97.66% 00:10:39.137 nvme0n3: ios=3642/3905, merge=0/0, ticks=31279/32792, in_queue=64071, util=97.91% 00:10:39.137 nvme0n4: ios=3590/4055, merge=0/0, ticks=49620/50542, in_queue=100162, util=90.53% 00:10:39.137 01:48:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:39.137 [global] 00:10:39.137 thread=1 00:10:39.137 invalidate=1 00:10:39.137 rw=randwrite 00:10:39.137 time_based=1 00:10:39.137 runtime=1 00:10:39.137 ioengine=libaio 00:10:39.137 direct=1 00:10:39.137 bs=4096 00:10:39.137 iodepth=128 00:10:39.137 norandommap=0 00:10:39.137 numjobs=1 00:10:39.137 00:10:39.137 verify_dump=1 00:10:39.137 verify_backlog=512 00:10:39.137 verify_state_save=0 00:10:39.137 do_verify=1 00:10:39.137 verify=crc32c-intel 00:10:39.137 [job0] 00:10:39.137 filename=/dev/nvme0n1 00:10:39.137 [job1] 00:10:39.137 filename=/dev/nvme0n2 00:10:39.137 [job2] 00:10:39.137 filename=/dev/nvme0n3 00:10:39.137 [job3] 00:10:39.137 filename=/dev/nvme0n4 00:10:39.137 Could not set queue depth (nvme0n1) 00:10:39.137 Could not set queue depth (nvme0n2) 00:10:39.137 Could not set queue depth (nvme0n3) 00:10:39.137 Could not set queue depth (nvme0n4) 00:10:39.395 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.395 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.395 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.395 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:39.395 fio-3.35 00:10:39.395 Starting 4 threads 00:10:40.769 00:10:40.769 job0: (groupid=0, jobs=1): err= 0: pid=1352608: Wed Jul 24 01:48:55 2024 
00:10:40.769 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:40.769 slat (usec): min=2, max=15245, avg=96.94, stdev=653.14 00:10:40.769 clat (usec): min=2114, max=55196, avg=13265.08, stdev=6443.04 00:10:40.769 lat (usec): min=2118, max=55204, avg=13362.02, stdev=6483.84 00:10:40.769 clat percentiles (usec): 00:10:40.769 | 1.00th=[ 4686], 5.00th=[ 7570], 10.00th=[ 8717], 20.00th=[ 9896], 00:10:40.769 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11207], 60.00th=[12387], 00:10:40.770 | 70.00th=[13566], 80.00th=[14877], 90.00th=[19006], 95.00th=[29230], 00:10:40.770 | 99.00th=[35390], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740], 00:10:40.770 | 99.99th=[55313] 00:10:40.770 write: IOPS=4696, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1004msec); 0 zone resets 00:10:40.770 slat (usec): min=3, max=20658, avg=94.80, stdev=719.07 00:10:40.770 clat (usec): min=600, max=72861, avg=13936.12, stdev=10899.06 00:10:40.770 lat (usec): min=618, max=72870, avg=14030.92, stdev=10940.66 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 1336], 5.00th=[ 2933], 10.00th=[ 7046], 20.00th=[ 8455], 00:10:40.770 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11600], 60.00th=[12649], 00:10:40.770 | 70.00th=[12911], 80.00th=[14877], 90.00th=[24773], 95.00th=[36439], 00:10:40.770 | 99.00th=[69731], 99.50th=[70779], 99.90th=[72877], 99.95th=[72877], 00:10:40.770 | 99.99th=[72877] 00:10:40.770 bw ( KiB/s): min=18176, max=18688, per=31.07%, avg=18432.00, stdev=362.04, samples=2 00:10:40.770 iops : min= 4544, max= 4672, avg=4608.00, stdev=90.51, samples=2 00:10:40.770 lat (usec) : 750=0.09%, 1000=0.25% 00:10:40.770 lat (msec) : 2=1.00%, 4=2.53%, 10=21.36%, 20=64.81%, 50=8.59% 00:10:40.770 lat (msec) : 100=1.38% 00:10:40.770 cpu : usr=3.29%, sys=7.38%, ctx=333, majf=0, minf=13 00:10:40.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:40.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.770 issued rwts: total=4608,4715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.770 job1: (groupid=0, jobs=1): err= 0: pid=1352609: Wed Jul 24 01:48:55 2024 00:10:40.770 read: IOPS=2736, BW=10.7MiB/s (11.2MB/s)(11.1MiB/1043msec) 00:10:40.770 slat (usec): min=2, max=24657, avg=167.04, stdev=1132.40 00:10:40.770 clat (usec): min=4620, max=74915, avg=20417.41, stdev=13178.03 00:10:40.770 lat (usec): min=4629, max=99572, avg=20584.45, stdev=13265.39 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 7635], 5.00th=[ 8979], 10.00th=[10814], 20.00th=[11469], 00:10:40.770 | 30.00th=[13566], 40.00th=[15008], 50.00th=[16188], 60.00th=[17433], 00:10:40.770 | 70.00th=[19268], 80.00th=[24249], 90.00th=[42206], 95.00th=[50594], 00:10:40.770 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:40.770 | 99.99th=[74974] 00:10:40.770 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1043msec); 0 zone resets 00:10:40.770 slat (usec): min=3, max=31041, avg=160.49, stdev=1121.27 00:10:40.770 clat (usec): min=5425, max=80425, avg=23353.08, stdev=17745.99 00:10:40.770 lat (usec): min=6742, max=80469, avg=23513.57, stdev=17865.30 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 8225], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:10:40.770 | 30.00th=[11600], 40.00th=[13042], 50.00th=[13435], 60.00th=[15664], 00:10:40.770 | 70.00th=[27657], 80.00th=[35390], 90.00th=[52167], 
95.00th=[63177], 00:10:40.770 | 99.00th=[72877], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:10:40.770 | 99.99th=[80217] 00:10:40.770 bw ( KiB/s): min= 8872, max=15704, per=20.72%, avg=12288.00, stdev=4830.95, samples=2 00:10:40.770 iops : min= 2218, max= 3926, avg=3072.00, stdev=1207.74, samples=2 00:10:40.770 lat (msec) : 10=7.85%, 20=59.80%, 50=23.49%, 100=8.86% 00:10:40.770 cpu : usr=3.93%, sys=3.55%, ctx=322, majf=0, minf=13 00:10:40.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:40.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.770 issued rwts: total=2854,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.770 job2: (groupid=0, jobs=1): err= 0: pid=1352611: Wed Jul 24 01:48:55 2024 00:10:40.770 read: IOPS=4081, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:40.770 slat (usec): min=2, max=26531, avg=124.14, stdev=954.33 00:10:40.770 clat (usec): min=1966, max=53014, avg=16880.88, stdev=7563.72 00:10:40.770 lat (usec): min=2817, max=53027, avg=17005.01, stdev=7604.03 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[10814], 20.00th=[11863], 00:10:40.770 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13698], 60.00th=[14877], 00:10:40.770 | 70.00th=[17433], 80.00th=[21890], 90.00th=[29230], 95.00th=[31851], 00:10:40.770 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:40.770 | 99.99th=[53216] 00:10:40.770 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:40.770 slat (usec): min=3, max=14513, avg=104.85, stdev=769.90 00:10:40.770 clat (usec): min=5156, max=30092, avg=14158.57, stdev=3709.37 00:10:40.770 lat (usec): min=5176, max=30103, avg=14263.42, stdev=3759.77 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11207], 00:10:40.770 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13304], 60.00th=[14222], 00:10:40.770 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19268], 95.00th=[21103], 00:10:40.770 | 99.00th=[26608], 99.50th=[27657], 99.90th=[27657], 99.95th=[27919], 00:10:40.770 | 99.99th=[30016] 00:10:40.770 bw ( KiB/s): min=16384, max=16384, per=27.62%, avg=16384.00, stdev= 0.00, samples=2 00:10:40.770 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:10:40.770 lat (msec) : 2=0.01%, 4=0.21%, 10=7.13%, 20=77.36%, 50=15.27% 00:10:40.770 lat (msec) : 100=0.01% 00:10:40.770 cpu : usr=3.19%, sys=6.99%, ctx=240, majf=0, minf=15 00:10:40.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:40.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.770 issued rwts: total=4094,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.770 job3: (groupid=0, jobs=1): err= 0: pid=1352612: Wed Jul 24 01:48:55 2024 00:10:40.770 read: IOPS=3183, BW=12.4MiB/s (13.0MB/s)(13.0MiB/1043msec) 00:10:40.770 slat (usec): min=2, max=22172, avg=151.48, stdev=1009.75 00:10:40.770 clat (usec): min=5776, max=73535, avg=18588.79, stdev=9864.51 00:10:40.770 lat (usec): min=5789, max=73573, avg=18740.27, stdev=9938.80 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 7570], 5.00th=[10814], 
10.00th=[11731], 20.00th=[12649], 00:10:40.770 | 30.00th=[13566], 40.00th=[14746], 50.00th=[15270], 60.00th=[16188], 00:10:40.770 | 70.00th=[18220], 80.00th=[21890], 90.00th=[28705], 95.00th=[44827], 00:10:40.770 | 99.00th=[58459], 99.50th=[60556], 99.90th=[73925], 99.95th=[73925], 00:10:40.770 | 99.99th=[73925] 00:10:40.770 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:10:40.770 slat (usec): min=4, max=12110, avg=126.03, stdev=562.83 00:10:40.770 clat (usec): min=1350, max=73561, avg=19680.97, stdev=9761.78 00:10:40.770 lat (usec): min=1363, max=73584, avg=19807.00, stdev=9799.45 00:10:40.770 clat percentiles (usec): 00:10:40.770 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[11469], 20.00th=[13566], 00:10:40.770 | 30.00th=[14222], 40.00th=[15008], 50.00th=[15795], 60.00th=[18220], 00:10:40.770 | 70.00th=[22152], 80.00th=[26608], 90.00th=[33424], 95.00th=[35390], 00:10:40.770 | 99.00th=[60031], 99.50th=[62653], 99.90th=[63701], 99.95th=[73925], 00:10:40.770 | 99.99th=[73925] 00:10:40.770 bw ( KiB/s): min=12288, max=16384, per=24.17%, avg=14336.00, stdev=2896.31, samples=2 00:10:40.770 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:40.770 lat (msec) : 2=0.13%, 4=0.04%, 10=4.79%, 20=63.05%, 50=29.49% 00:10:40.770 lat (msec) : 100=2.49% 00:10:40.770 cpu : usr=6.05%, sys=7.77%, ctx=433, majf=0, minf=9 00:10:40.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:40.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:40.770 issued rwts: total=3320,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:40.770 00:10:40.770 Run status group 0 (all jobs): 00:10:40.770 READ: bw=55.7MiB/s (58.4MB/s), 10.7MiB/s-17.9MiB/s (11.2MB/s-18.8MB/s), io=58.1MiB (60.9MB), run=1003-1043msec 00:10:40.770 WRITE: bw=57.9MiB/s (60.7MB/s), 11.5MiB/s-18.3MiB/s (12.1MB/s-19.2MB/s), io=60.4MiB (63.4MB), run=1003-1043msec 00:10:40.770 00:10:40.770 Disk stats (read/write): 00:10:40.770 nvme0n1: ios=3606/3887, merge=0/0, ticks=23924/20923, in_queue=44847, util=96.69% 00:10:40.770 nvme0n2: ios=2574/2823, merge=0/0, ticks=18907/21018, in_queue=39925, util=96.85% 00:10:40.770 nvme0n3: ios=3562/3584, merge=0/0, ticks=31549/27245, in_queue=58794, util=97.49% 00:10:40.770 nvme0n4: ios=2881/3072, merge=0/0, ticks=48202/55344, in_queue=103546, util=89.59% 00:10:40.770 01:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:40.770 01:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1352750 00:10:40.770 01:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:40.770 01:48:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:40.770 [global] 00:10:40.770 thread=1 00:10:40.770 invalidate=1 00:10:40.770 rw=read 00:10:40.770 time_based=1 00:10:40.770 runtime=10 00:10:40.770 ioengine=libaio 00:10:40.770 direct=1 00:10:40.770 bs=4096 00:10:40.770 iodepth=1 00:10:40.771 norandommap=1 00:10:40.771 numjobs=1 00:10:40.771 00:10:40.771 [job0] 00:10:40.771 filename=/dev/nvme0n1 00:10:40.771 [job1] 00:10:40.771 filename=/dev/nvme0n2 00:10:40.771 [job2] 00:10:40.771 filename=/dev/nvme0n3 00:10:40.771 [job3] 00:10:40.771 filename=/dev/nvme0n4 00:10:40.771 Could not set 
queue depth (nvme0n1) 00:10:40.771 Could not set queue depth (nvme0n2) 00:10:40.771 Could not set queue depth (nvme0n3) 00:10:40.771 Could not set queue depth (nvme0n4) 00:10:40.771 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.771 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.771 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.771 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.771 fio-3.35 00:10:40.771 Starting 4 threads 00:10:44.049 01:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:44.049 01:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:44.049 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=303104, buflen=4096 00:10:44.049 fio: pid=1352848, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:44.049 01:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.049 01:48:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:44.049 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=450560, buflen=4096 00:10:44.049 fio: pid=1352847, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:44.307 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.307 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:44.307 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1454080, buflen=4096 00:10:44.307 fio: pid=1352845, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:44.565 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.565 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:44.565 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=3534848, buflen=4096 00:10:44.565 fio: pid=1352846, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:44.565 00:10:44.565 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1352845: Wed Jul 24 01:48:59 2024 00:10:44.565 read: IOPS=103, BW=413KiB/s (423kB/s)(1420KiB/3436msec) 00:10:44.565 slat (usec): min=5, max=28824, avg=112.86, stdev=1533.61 00:10:44.565 clat (usec): min=271, max=41301, avg=9496.01, stdev=16999.36 00:10:44.565 lat (usec): min=282, max=44192, avg=9609.10, stdev=17039.74 00:10:44.565 clat percentiles (usec): 00:10:44.565 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 293], 00:10:44.565 | 30.00th=[ 334], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:10:44.565 | 70.00th=[ 388], 80.00th=[41157], 
90.00th=[41157], 95.00th=[41157], 00:10:44.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:44.565 | 99.99th=[41157] 00:10:44.565 bw ( KiB/s): min= 96, max= 2184, per=29.41%, avg=445.33, stdev=851.78, samples=6 00:10:44.565 iops : min= 24, max= 546, avg=111.33, stdev=212.94, samples=6 00:10:44.565 lat (usec) : 500=76.12%, 750=1.12% 00:10:44.565 lat (msec) : 50=22.47% 00:10:44.565 cpu : usr=0.00%, sys=0.38%, ctx=362, majf=0, minf=1 00:10:44.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 issued rwts: total=356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.565 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1352846: Wed Jul 24 01:48:59 2024 00:10:44.565 read: IOPS=233, BW=931KiB/s (954kB/s)(3452KiB/3706msec) 00:10:44.565 slat (usec): min=4, max=21910, avg=42.68, stdev=763.02 00:10:44.565 clat (usec): min=215, max=41228, avg=4208.67, stdev=11920.93 00:10:44.565 lat (usec): min=228, max=45983, avg=4251.38, stdev=11957.75 00:10:44.565 clat percentiles (usec): 00:10:44.565 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 255], 00:10:44.565 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 326], 60.00th=[ 388], 00:10:44.565 | 70.00th=[ 474], 80.00th=[ 502], 90.00th=[ 562], 95.00th=[41157], 00:10:44.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:44.565 | 99.99th=[41157] 00:10:44.565 bw ( KiB/s): min= 96, max= 3386, per=41.63%, avg=630.00, stdev=1225.75, samples=7 00:10:44.565 iops : min= 24, max= 846, avg=157.43, stdev=306.25, samples=7 00:10:44.565 lat (usec) : 250=16.32%, 500=63.66%, 750=10.42% 00:10:44.565 lat (msec) : 50=9.49% 00:10:44.565 cpu : usr=0.13%, sys=0.27%, ctx=868, majf=0, minf=1 00:10:44.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 issued rwts: total=864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.565 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1352847: Wed Jul 24 01:48:59 2024 00:10:44.565 read: IOPS=34, BW=138KiB/s (141kB/s)(440KiB/3195msec) 00:10:44.565 slat (nsec): min=6120, max=42952, avg=21490.14, stdev=9343.78 00:10:44.565 clat (usec): min=259, max=43957, avg=28812.80, stdev=18707.91 00:10:44.565 lat (usec): min=284, max=43972, avg=28834.17, stdev=18710.43 00:10:44.565 clat percentiles (usec): 00:10:44.565 | 1.00th=[ 269], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 392], 00:10:44.565 | 30.00th=[ 701], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:44.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:44.565 | 99.00th=[41157], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:44.565 | 99.99th=[43779] 00:10:44.565 bw ( KiB/s): min= 96, max= 344, per=9.25%, avg=140.00, stdev=100.02, samples=6 00:10:44.565 iops : min= 24, max= 86, avg=35.00, stdev=25.00, samples=6 00:10:44.565 lat (usec) : 500=28.83%, 750=0.90% 00:10:44.565 lat (msec) : 50=69.37% 00:10:44.565 cpu : usr=0.03%, sys=0.03%, ctx=111, 
majf=0, minf=1 00:10:44.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.565 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1352848: Wed Jul 24 01:48:59 2024 00:10:44.565 read: IOPS=25, BW=101KiB/s (103kB/s)(296KiB/2935msec) 00:10:44.565 slat (nsec): min=15016, max=46450, avg=22441.51, stdev=8446.03 00:10:44.565 clat (usec): min=430, max=41581, avg=39338.13, stdev=8035.00 00:10:44.565 lat (usec): min=451, max=41599, avg=39360.66, stdev=8035.05 00:10:44.565 clat percentiles (usec): 00:10:44.565 | 1.00th=[ 433], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:44.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:44.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:44.565 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:44.565 | 99.99th=[41681] 00:10:44.565 bw ( KiB/s): min= 96, max= 112, per=6.61%, avg=100.80, stdev= 7.16, samples=5 00:10:44.565 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:10:44.565 lat (usec) : 500=2.67%, 750=1.33% 00:10:44.565 lat (msec) : 50=94.67% 00:10:44.565 cpu : usr=0.07%, sys=0.00%, ctx=76, majf=0, minf=1 00:10:44.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.565 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.565 00:10:44.565 Run status group 0 (all jobs): 00:10:44.565 READ: bw=1513KiB/s (1550kB/s), 101KiB/s-931KiB/s (103kB/s-954kB/s), io=5608KiB (5743kB), run=2935-3706msec 00:10:44.565 00:10:44.565 Disk stats (read/write): 00:10:44.565 nvme0n1: ios=395/0, merge=0/0, ticks=3478/0, in_queue=3478, util=98.31% 00:10:44.566 nvme0n2: ios=672/0, merge=0/0, ticks=3979/0, in_queue=3979, util=98.47% 00:10:44.566 nvme0n3: ios=108/0, merge=0/0, ticks=3089/0, in_queue=3089, util=96.72% 00:10:44.566 nvme0n4: ios=123/0, merge=0/0, ticks=3922/0, in_queue=3922, util=98.81% 00:10:44.823 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:44.823 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:45.081 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.081 01:48:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:45.338 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.338 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc5 00:10:45.596 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:45.596 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:45.853 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:45.853 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1352750 00:10:45.853 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:45.853 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:46.111 nvmf hotplug test: fio failed as expected 00:10:46.111 01:49:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.369 rmmod nvme_tcp 00:10:46.369 rmmod nvme_fabrics 00:10:46.369 rmmod 
nvme_keyring 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1350717 ']' 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1350717 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1350717 ']' 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1350717 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1350717 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:46.369 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:46.370 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1350717' 00:10:46.370 killing process with pid 1350717 00:10:46.370 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1350717 00:10:46.370 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1350717 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.629 01:49:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:49.162 00:10:49.162 real 0m23.509s 00:10:49.162 user 1m22.588s 00:10:49.162 sys 0m6.106s 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.162 ************************************ 00:10:49.162 END TEST nvmf_fio_target 00:10:49.162 ************************************ 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 
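For reference, the four failing jobs in the hotplug run above are plain 4 KiB sequential reads at queue depth 1 through libaio, one job per exported namespace; the Remote I/O errors are expected because the backing raid/malloc bdevs are deleted while the jobs are still running. The harness's actual job file is not shown in this log, so the following command-line equivalent of that job shape is only an illustrative sketch:

# one read job per namespace device, mirroring job0..job3 above
for i in 1 2 3 4; do
  fio --name="job$((i-1))" --filename="/dev/nvme0n$i" \
      --rw=read --bs=4k --ioengine=libaio --iodepth=1 &
done
wait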
00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:49.162 ************************************ 00:10:49.162 START TEST nvmf_bdevio 00:10:49.162 ************************************ 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:49.162 * Looking for test storage... 00:10:49.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:49.162 01:49:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:51.096 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:51.096 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:51.096 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:51.096 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.096 01:49:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:10:51.096 00:10:51.096 --- 10.0.0.2 ping statistics --- 00:10:51.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.096 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:10:51.096 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:10:51.096 00:10:51.096 --- 10.0.0.1 ping statistics --- 00:10:51.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.097 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1355471 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1355471 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1355471 ']' 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.097 01:49:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.097 [2024-07-24 01:49:05.885858] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
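The interface plumbing traced above is the phy-mode TCP topology used throughout these tests: the target-side port cvl_0_0 moves into a private network namespace and takes 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, and TCP port 4420 is opened before both directions are ping-checked. Condensed into plain commands (interface names and addresses exactly as in the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1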
00:10:51.097 [2024-07-24 01:49:05.885948] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.097 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.097 [2024-07-24 01:49:05.953922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.355 [2024-07-24 01:49:06.053951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.355 [2024-07-24 01:49:06.054019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.355 [2024-07-24 01:49:06.054036] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.355 [2024-07-24 01:49:06.054049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.355 [2024-07-24 01:49:06.054071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.355 [2024-07-24 01:49:06.054161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.355 [2024-07-24 01:49:06.054217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:51.355 [2024-07-24 01:49:06.054273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:51.355 [2024-07-24 01:49:06.054276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.355 [2024-07-24 01:49:06.206519] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.355 Malloc0 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.355 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.614 [2024-07-24 01:49:06.258663] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:51.614 { 00:10:51.614 "params": { 00:10:51.614 "name": "Nvme$subsystem", 00:10:51.614 "trtype": "$TEST_TRANSPORT", 00:10:51.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.614 "adrfam": "ipv4", 00:10:51.614 "trsvcid": "$NVMF_PORT", 00:10:51.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.614 "hdgst": ${hdgst:-false}, 00:10:51.614 "ddgst": ${ddgst:-false} 00:10:51.614 }, 00:10:51.614 "method": "bdev_nvme_attach_controller" 00:10:51.614 } 00:10:51.614 EOF 00:10:51.614 )") 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
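The target-side configuration for the bdevio pass boils down to a handful of RPCs (rpc_cmd in the harness wraps scripts/rpc.py against the nvmf_tgt just started inside the namespace), after which the bdevio binary is pointed at the listener through a generated JSON config; /dev/fd/62 in the trace is the process-substitution file descriptor that carries that JSON. With paths shortened, the sequence is roughly:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)   # JSON body printed just below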
00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:51.614 01:49:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:51.614 "params": { 00:10:51.614 "name": "Nvme1", 00:10:51.614 "trtype": "tcp", 00:10:51.614 "traddr": "10.0.0.2", 00:10:51.614 "adrfam": "ipv4", 00:10:51.614 "trsvcid": "4420", 00:10:51.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.614 "hdgst": false, 00:10:51.614 "ddgst": false 00:10:51.614 }, 00:10:51.614 "method": "bdev_nvme_attach_controller" 00:10:51.614 }' 00:10:51.614 [2024-07-24 01:49:06.305219] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:10:51.614 [2024-07-24 01:49:06.305290] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355612 ] 00:10:51.614 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.614 [2024-07-24 01:49:06.367028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.614 [2024-07-24 01:49:06.459288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.614 [2024-07-24 01:49:06.459345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.614 [2024-07-24 01:49:06.459349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.872 I/O targets: 00:10:51.872 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:51.872 00:10:51.872 00:10:51.872 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.872 http://cunit.sourceforge.net/ 00:10:51.872 00:10:51.872 00:10:51.872 Suite: bdevio tests on: Nvme1n1 00:10:51.872 Test: blockdev write read block ...passed 00:10:51.872 Test: blockdev write zeroes read block ...passed 00:10:51.872 Test: blockdev write zeroes read no split ...passed 00:10:51.872 Test: blockdev write zeroes read split ...passed 00:10:52.130 Test: blockdev write zeroes read split partial ...passed 00:10:52.130 Test: blockdev reset ...[2024-07-24 01:49:06.798232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:52.130 [2024-07-24 01:49:06.798354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xca9a60 (9): Bad file descriptor 00:10:52.130 [2024-07-24 01:49:06.900522] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:52.130 passed 00:10:52.130 Test: blockdev write read 8 blocks ...passed 00:10:52.130 Test: blockdev write read size > 128k ...passed 00:10:52.130 Test: blockdev write read invalid size ...passed 00:10:52.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:52.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:52.130 Test: blockdev write read max offset ...passed 00:10:52.387 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:52.387 Test: blockdev writev readv 8 blocks ...passed 00:10:52.387 Test: blockdev writev readv 30 x 1block ...passed 00:10:52.387 Test: blockdev writev readv block ...passed 00:10:52.387 Test: blockdev writev readv size > 128k ...passed 00:10:52.387 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:52.387 Test: blockdev comparev and writev ...[2024-07-24 01:49:07.198477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.198513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.198537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.198561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.198914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.198939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.198961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.198977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.199339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.199364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.199386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.199401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.199755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.199779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:52.387 [2024-07-24 01:49:07.199799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:52.387 [2024-07-24 01:49:07.199815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:52.387 passed 00:10:52.388 Test: blockdev nvme passthru rw ...passed 00:10:52.388 Test: blockdev nvme passthru vendor specific ...[2024-07-24 01:49:07.282638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.388 [2024-07-24 01:49:07.282672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:52.388 [2024-07-24 01:49:07.282843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.388 [2024-07-24 01:49:07.282866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:52.646 [2024-07-24 01:49:07.283056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.646 [2024-07-24 01:49:07.283090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:52.646 [2024-07-24 01:49:07.283291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:52.646 [2024-07-24 01:49:07.283324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:52.646 passed 00:10:52.646 Test: blockdev nvme admin passthru ...passed 00:10:52.646 Test: blockdev copy ...passed 00:10:52.646 00:10:52.646 Run Summary: Type Total Ran Passed Failed Inactive 00:10:52.646 suites 1 1 n/a 0 0 00:10:52.646 tests 23 23 23 0 0 00:10:52.646 asserts 152 152 152 0 n/a 00:10:52.646 00:10:52.646 Elapsed time = 1.397 seconds 00:10:52.646 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.646 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.646 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:52.904 rmmod nvme_tcp 00:10:52.904 rmmod nvme_fabrics 00:10:52.904 rmmod nvme_keyring 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
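The wind-down that follows is equally mechanical: the initiator-side kernel modules are unloaded (the rmmod lines above are modprobe -r removing nvme_tcp and its now-unused dependencies), the nvmf_tgt application is killed and reaped, and the test address is flushed. In shorthand, with the pid variable named as in the harness:

modprobe -r nvme-tcp                 # also drops nvme_fabrics and nvme_keyring once unused
modprobe -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=1355471 in this run
ip -4 addr flush cvl_0_1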
00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1355471 ']' 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1355471 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 1355471 ']' 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1355471 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1355471 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1355471' 00:10:52.904 killing process with pid 1355471 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1355471 00:10:52.904 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1355471 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.162 01:49:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.065 01:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:55.065 00:10:55.065 real 0m6.415s 00:10:55.065 user 0m10.347s 00:10:55.065 sys 0m2.125s 00:10:55.065 01:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.065 01:49:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:55.065 ************************************ 00:10:55.065 END TEST nvmf_bdevio 00:10:55.065 ************************************ 00:10:55.065 01:49:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:55.065 00:10:55.065 real 3m51.553s 00:10:55.065 user 9m58.542s 00:10:55.065 sys 1m8.452s 00:10:55.065 01:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.065 01:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:55.065 ************************************ 00:10:55.065 END TEST nvmf_target_core 00:10:55.065 ************************************ 00:10:55.324 01:49:09 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.324 01:49:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.324 01:49:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.324 01:49:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:55.324 ************************************ 00:10:55.324 START TEST nvmf_target_extra 00:10:55.324 ************************************ 00:10:55.324 01:49:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:55.324 * Looking for test storage... 00:10:55.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
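Before nvmf_example starts below, it is worth noting how the test drives the target once it is up: the rpc_cmd calls traced further down amount to a five-step JSON-RPC bring-up followed by a perf run. A condensed sketch, using scripts/rpc.py as the assumed entry point (the run itself goes through the harness's rpc_cmd wrapper against the example app inside the cvl_0_0_ns_spdk namespace):

  # transport, backing bdev, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                      # returns Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # exercise the export with the perf tool, matching the flags in the trace below
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'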
00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.324 ************************************ 00:10:55.324 START TEST nvmf_example 00:10:55.324 ************************************ 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:55.324 * Looking for test storage... 00:10:55.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.324 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.325 01:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:55.325 01:49:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:57.854 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:57.854 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:57.854 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.854 01:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:57.854 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.854 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:57.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:57.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:10:57.855 00:10:57.855 --- 10.0.0.2 ping statistics --- 00:10:57.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.855 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:10:57.855 00:10:57.855 --- 10.0.0.1 ping statistics --- 00:10:57.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.855 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1357729 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1357729 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1357729 ']' 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.855 01:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.855 01:49:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.855 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.788 01:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:58.788 01:49:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:58.788 EAL: No free 2048 kB hugepages reported on node 1 00:11:10.980 Initializing NVMe Controllers 00:11:10.980 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.980 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:10.980 Initialization complete. Launching workers. 00:11:10.980 ======================================================== 00:11:10.980 Latency(us) 00:11:10.980 Device Information : IOPS MiB/s Average min max 00:11:10.980 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15312.91 59.82 4179.70 671.46 45083.35 00:11:10.980 ======================================================== 00:11:10.980 Total : 15312.91 59.82 4179.70 671.46 45083.35 00:11:10.980 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.980 rmmod nvme_tcp 00:11:10.980 rmmod nvme_fabrics 00:11:10.980 rmmod nvme_keyring 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1357729 ']' 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1357729 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1357729 ']' 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1357729 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.980 01:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1357729 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1357729' 00:11:10.980 killing process with pid 1357729 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 1357729 00:11:10.980 01:49:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 1357729 00:11:10.980 nvmf threads initialize successfully 00:11:10.980 bdev subsystem init successfully 00:11:10.980 created a nvmf target service 00:11:10.980 create targets's poll groups done 00:11:10.980 all subsystems of target started 00:11:10.980 nvmf target is running 00:11:10.980 all subsystems of target stopped 00:11:10.980 destroy targets's poll groups done 00:11:10.980 destroyed the nvmf target service 00:11:10.980 bdev subsystem finish successfully 00:11:10.980 nvmf threads destroy successfully 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.980 01:49:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.238 00:11:11.238 real 0m16.014s 00:11:11.238 user 0m45.550s 00:11:11.238 sys 0m3.355s 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:11.238 ************************************ 00:11:11.238 END TEST nvmf_example 00:11:11.238 ************************************ 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.238 01:49:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.238 01:49:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.499 ************************************ 00:11:11.499 START TEST nvmf_filesystem 00:11:11.499 ************************************ 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:11.499 * Looking for test storage... 00:11:11.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:11.499 01:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:11.499 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:11.500 
01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:11.500 #define SPDK_CONFIG_H 00:11:11.500 #define SPDK_CONFIG_APPS 1 00:11:11.500 #define SPDK_CONFIG_ARCH native 00:11:11.500 #undef SPDK_CONFIG_ASAN 00:11:11.500 #undef SPDK_CONFIG_AVAHI 00:11:11.500 #undef SPDK_CONFIG_CET 00:11:11.500 #define SPDK_CONFIG_COVERAGE 1 00:11:11.500 #define SPDK_CONFIG_CROSS_PREFIX 00:11:11.500 #undef SPDK_CONFIG_CRYPTO 00:11:11.500 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:11.500 #undef SPDK_CONFIG_CUSTOMOCF 00:11:11.500 #undef SPDK_CONFIG_DAOS 00:11:11.500 #define SPDK_CONFIG_DAOS_DIR 00:11:11.500 #define SPDK_CONFIG_DEBUG 1 00:11:11.500 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:11.500 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:11.500 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:11.500 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:11.500 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:11.500 #undef SPDK_CONFIG_DPDK_UADK 00:11:11.500 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:11.500 #define SPDK_CONFIG_EXAMPLES 1 00:11:11.500 #undef SPDK_CONFIG_FC 00:11:11.500 #define SPDK_CONFIG_FC_PATH 00:11:11.500 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:11.500 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:11.500 #undef SPDK_CONFIG_FUSE 00:11:11.500 #undef SPDK_CONFIG_FUZZER 00:11:11.500 #define SPDK_CONFIG_FUZZER_LIB 00:11:11.500 #undef SPDK_CONFIG_GOLANG 00:11:11.500 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:11.500 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:11.500 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:11.500 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:11.500 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:11.500 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:11.500 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:11.500 #define SPDK_CONFIG_IDXD 1 00:11:11.500 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:11.500 #undef SPDK_CONFIG_IPSEC_MB 00:11:11.500 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:11.500 #define SPDK_CONFIG_ISAL 1 00:11:11.500 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:11.500 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:11.500 #define SPDK_CONFIG_LIBDIR 00:11:11.500 #undef SPDK_CONFIG_LTO 00:11:11.500 #define SPDK_CONFIG_MAX_LCORES 128 00:11:11.500 #define SPDK_CONFIG_NVME_CUSE 1 00:11:11.500 #undef SPDK_CONFIG_OCF 00:11:11.500 #define SPDK_CONFIG_OCF_PATH 00:11:11.500 #define SPDK_CONFIG_OPENSSL_PATH 00:11:11.500 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:11.500 #define SPDK_CONFIG_PGO_DIR 00:11:11.500 #undef SPDK_CONFIG_PGO_USE 00:11:11.500 #define SPDK_CONFIG_PREFIX /usr/local 00:11:11.500 #undef SPDK_CONFIG_RAID5F 00:11:11.500 #undef SPDK_CONFIG_RBD 00:11:11.500 #define SPDK_CONFIG_RDMA 1 00:11:11.500 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:11.500 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:11.500 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:11.500 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:11.500 #define SPDK_CONFIG_SHARED 1 00:11:11.500 #undef SPDK_CONFIG_SMA 00:11:11.500 #define SPDK_CONFIG_TESTS 1 00:11:11.500 #undef SPDK_CONFIG_TSAN 00:11:11.500 #define SPDK_CONFIG_UBLK 1 00:11:11.500 #define SPDK_CONFIG_UBSAN 1 00:11:11.500 #undef SPDK_CONFIG_UNIT_TESTS 00:11:11.500 #undef SPDK_CONFIG_URING 00:11:11.500 #define 
SPDK_CONFIG_URING_PATH 00:11:11.500 #undef SPDK_CONFIG_URING_ZNS 00:11:11.500 #undef SPDK_CONFIG_USDT 00:11:11.500 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:11.500 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:11.500 #define SPDK_CONFIG_VFIO_USER 1 00:11:11.500 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:11.500 #define SPDK_CONFIG_VHOST 1 00:11:11.500 #define SPDK_CONFIG_VIRTIO 1 00:11:11.500 #undef SPDK_CONFIG_VTUNE 00:11:11.500 #define SPDK_CONFIG_VTUNE_DIR 00:11:11.500 #define SPDK_CONFIG_WERROR 1 00:11:11.500 #define SPDK_CONFIG_WPDK_DIR 00:11:11.500 #undef SPDK_CONFIG_XNVME 00:11:11.500 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:11.500 01:49:26 
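applications.sh then pattern-matches the generated include/spdk/config.h against "#define SPDK_CONFIG_DEBUG" before consulting SPDK_AUTOTEST_DEBUG_APPS, as traced above. A rough reconstruction of that gate, not the actual script source (rootdir is a stand-in for the checkout path):

  # Only consider debug-only app behaviour when the generated config header says
  # this is a debug build
  config_h=$rootdir/include/spdk/config.h
  if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      : "${SPDK_AUTOTEST_DEBUG_APPS:=0}"   # the trace gates extra app options on this knob
  fi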
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:11.500 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:11.501 01:49:26 
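The scripts/perf/pm/common fragment traced above decides which resource monitors to run: cpu-load and vmstat always, cpu-temp and bmc-pm only on a bare-metal Linux host, each tagged with whether it needs sudo. A simplified sketch of that selection (the DMI-based QEMU check from the trace is omitted):

  # Which collectors need sudo, and which run by default
  declare -A MONITOR_RESOURCES_SUDO=(
      [collect-bmc-pm]=1
      [collect-cpu-load]=0
      [collect-cpu-temp]=0
      [collect-vmstat]=0
  )
  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

  # On a bare-metal Linux host (not a container) also gather CPU temps and BMC power
  if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
      MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi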
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:11.501 01:49:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:11:11.501 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export 
SPDK_TEST_RAID5 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:11.502 01:49:26 
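The long run of autotest_common.sh exports above is the test-knob table: each ": value" line appears to be a default being applied and the following line exports the knob. A sketch of that idiom with a few of the traced names (the defaults below are illustrative, not this job's values):

  # Every knob gets a default, but the job environment can override it
  : "${SPDK_TEST_NVMF:=0}"
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
  : "${SPDK_TEST_NVMF_NICS:=}"
  : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT SPDK_TEST_NVMF_NICS SPDK_RUN_FUNCTIONAL_TEST

For this run the trace resolves SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp and SPDK_TEST_NVMF_NICS=e810, which is what steers the suite onto the TCP transport with E810 NICs.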
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:11.502 01:49:26 
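The library and Python path exports above make the freshly built SPDK, DPDK and libvfio-user libraries plus the Python RPC client visible to every child process; the repeated segments in LD_LIBRARY_PATH come from nested test scripts re-sourcing the same exports. A condensed sketch (rootdir again stands in for the checkout; the DPDK fallback is an assumption):

  SPDK_LIB_DIR=$rootdir/build/lib
  DPDK_LIB_DIR=${SPDK_RUN_EXTERNAL_DPDK:-$rootdir/dpdk/build}/lib
  VFIO_LIB_DIR=$rootdir/build/libvfio-user/usr/local/lib
  export SPDK_LIB_DIR DPDK_LIB_DIR VFIO_LIB_DIR

  export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:$LD_LIBRARY_PATH
  export PYTHONPATH=$rootdir/python:$PYTHONPATH
  export PYTHONDONTWRITEBYTECODE=1   # keep .pyc files out of the workspace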
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:11.502 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # 
AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1359433 ]] 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1359433 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@334 -- # local source fs size avail mount use 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.2xlCbt 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.2xlCbt/tests/target /tmp/spdk.2xlCbt 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # avails["$mount"]=53508886528 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994721280 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8485834752 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30935179264 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=62181376 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376535040 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22409216 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996803584 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=557056 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:11.503 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
read -r source fs size use avail _ mount 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:11:11.504 * Looking for test storage... 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53508886528 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10700427264 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
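set_test_storage, traced above, reserves roughly 2 GiB for the test: it builds a fallback directory name with mktemp, indexes the df -T output by mount point, and exports SPDK_TEST_STORAGE as the first candidate directory whose filesystem has enough free space. A condensed sketch of that logic (testdir is the per-test directory set by the harness; the overlay new_size / 95% quota check from the trace is omitted):

  set_test_storage() {
      local requested_size=$1 target_dir mount target_space
      local -A mounts fss sizes avails uses
      local storage_fallback storage_candidates

      storage_fallback=$(mktemp -udt spdk.XXXXXX)
      storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

      # Index "df -T" output by mount point
      while read -r source fs size use avail _ mount; do
          mounts["$mount"]=$source
          fss["$mount"]=$fs
          sizes["$mount"]=$size
          avails["$mount"]=$avail
          uses["$mount"]=$use
      done < <(df -T | grep -v Filesystem)

      for target_dir in "${storage_candidates[@]}"; do
          mount=$(df "$target_dir" 2>/dev/null | awk '$1 !~ /Filesystem/ {print $6}')
          [[ -n $mount ]] || continue
          target_space=${avails["$mount"]}
          if (( target_space >= requested_size )); then
              mkdir -p "$target_dir"
              export SPDK_TEST_STORAGE=$target_dir
              printf '* Found test storage at %s\n' "$target_dir"
              return 0
          fi
      done
  }

In this run the root overlay (53508886528 bytes available against a 2214592512-byte request) satisfies the check, so SPDK_TEST_STORAGE ends up pointing at the nvmf target test directory itself.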
common/autotest_common.sh@1687 -- # xtrace_fd 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.504 01:49:26 
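nvmf/common.sh, sourced above, pins the listener ports and derives the initiator-side host identity. A trimmed sketch of those defaults (the HOSTID derivation is an assumption; nvme gen-hostnqn output differs per host):

  # Listener ports and addressing used throughout the nvmf tests
  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_IP_LEAST_ADDR=8
  NVMF_TCP_IP_ADDRESS=127.0.0.1
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

  # Host identity handed to "nvme connect"; the NQN embeds a per-host UUID
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}          # assumed: the UUID after the last ':'
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT="nvme connect"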
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.504 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.505 01:49:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.035 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga 
net_devs 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:14.036 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:14.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:14.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:14.036 Found net devices under 0000:0a:00.1: cvl_0_1 
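At this point gather_supported_nvmf_pci_devs has matched two Intel 0x159b (E810, ice driver) ports and resolved each PCI address to its kernel net device through sysfs. A condensed sketch of that lookup, using the PCI addresses reported by this run:

# PCI addresses below are the ones this run reported; device names are host-specific.
for pci in 0000:0a:00.0 0000:0a:00.1; do
    # each PCI NIC exposes its net device name(s) under sysfs
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 and cvl_0_1 on this host
done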
00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:14.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:11:14.036 00:11:14.036 --- 10.0.0.2 ping statistics --- 00:11:14.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.036 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:14.036 00:11:14.036 --- 10.0.0.1 ping statistics --- 00:11:14.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.036 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.036 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:14.037 ************************************ 00:11:14.037 START TEST nvmf_filesystem_no_in_capsule 00:11:14.037 ************************************ 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1361061 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
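nvmf_tcp_init has just split the two ports across a network namespace and verified reachability in both directions. Reconstructed from the traced commands (the namespace, interface names and addresses are the ones this run used), the setup is roughly:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns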
-- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1361061 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1361061 ']' 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.037 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.037 [2024-07-24 01:49:28.663035] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:11:14.037 [2024-07-24 01:49:28.663122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.037 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.037 [2024-07-24 01:49:28.730245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.037 [2024-07-24 01:49:28.825389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.037 [2024-07-24 01:49:28.825447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.037 [2024-07-24 01:49:28.825463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.037 [2024-07-24 01:49:28.825476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.037 [2024-07-24 01:49:28.825488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:14.037 [2024-07-24 01:49:28.825544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.037 [2024-07-24 01:49:28.825576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.037 [2024-07-24 01:49:28.825631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.037 [2024-07-24 01:49:28.825634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.295 [2024-07-24 01:49:28.991899] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.295 01:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.295 Malloc1 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.295 01:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.295 [2024-07-24 01:49:29.176485] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.295 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:14.553 { 00:11:14.553 "name": "Malloc1", 00:11:14.553 "aliases": [ 00:11:14.553 "ff5188e0-6a11-4ab0-aa91-48eccedb9d07" 00:11:14.553 ], 00:11:14.553 "product_name": "Malloc disk", 00:11:14.553 "block_size": 512, 00:11:14.553 "num_blocks": 1048576, 00:11:14.553 "uuid": "ff5188e0-6a11-4ab0-aa91-48eccedb9d07", 00:11:14.553 "assigned_rate_limits": { 00:11:14.553 "rw_ios_per_sec": 0, 00:11:14.553 "rw_mbytes_per_sec": 0, 00:11:14.553 "r_mbytes_per_sec": 0, 00:11:14.553 "w_mbytes_per_sec": 0 00:11:14.553 }, 00:11:14.553 "claimed": true, 00:11:14.553 "claim_type": "exclusive_write", 00:11:14.553 "zoned": false, 00:11:14.553 "supported_io_types": { 00:11:14.553 "read": 
true, 00:11:14.553 "write": true, 00:11:14.553 "unmap": true, 00:11:14.553 "flush": true, 00:11:14.553 "reset": true, 00:11:14.553 "nvme_admin": false, 00:11:14.553 "nvme_io": false, 00:11:14.553 "nvme_io_md": false, 00:11:14.553 "write_zeroes": true, 00:11:14.553 "zcopy": true, 00:11:14.553 "get_zone_info": false, 00:11:14.553 "zone_management": false, 00:11:14.553 "zone_append": false, 00:11:14.553 "compare": false, 00:11:14.553 "compare_and_write": false, 00:11:14.553 "abort": true, 00:11:14.553 "seek_hole": false, 00:11:14.553 "seek_data": false, 00:11:14.553 "copy": true, 00:11:14.553 "nvme_iov_md": false 00:11:14.553 }, 00:11:14.553 "memory_domains": [ 00:11:14.553 { 00:11:14.553 "dma_device_id": "system", 00:11:14.553 "dma_device_type": 1 00:11:14.553 }, 00:11:14.553 { 00:11:14.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.553 "dma_device_type": 2 00:11:14.553 } 00:11:14.553 ], 00:11:14.553 "driver_specific": {} 00:11:14.553 } 00:11:14.553 ]' 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:14.553 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.118 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.118 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:15.118 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.118 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:15.118 01:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:17.016 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:17.280 01:49:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:17.578 01:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:18.143 01:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.514 ************************************ 00:11:19.514 START TEST filesystem_ext4 00:11:19.514 ************************************ 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
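Stripped of the xtrace prefixes, the flow traced since the target started comes down to the following (a condensed sketch; the NQN, serial, bdev name and mount point are taken straight from the trace, and rpc_cmd is the test suite's wrapper for issuing RPCs to the running nvmf_tgt):

# target side, inside the cvl_0_0_ns_spdk namespace
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0          # -c carries the test's in_capsule parameter (0 for this pass)
rpc_cmd bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB malloc bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side, root namespace
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # appears as nvme0n1 with serial SPDKISFASTANDAWESOME
mkdir -p /mnt/device
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%    # one partition spanning the namespace
partprobe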
00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:19.514 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:19.514 mke2fs 1.46.5 (30-Dec-2021) 00:11:19.514 Discarding device blocks: 0/522240 done 00:11:19.514 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:19.514 Filesystem UUID: d0251d71-ac08-4d23-8b69-b53aa6f59048 00:11:19.514 Superblock backups stored on blocks: 00:11:19.514 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:19.514 00:11:19.514 Allocating group tables: 0/64 done 00:11:19.514 Writing inode tables: 0/64 done 00:11:19.771 Creating journal (8192 blocks): done 00:11:19.771 Writing superblocks and filesystem accounting information: 0/64 done 00:11:19.771 00:11:19.771 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:19.771 01:49:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.704 
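Each filesystem_* subtest runs the same nvmf_filesystem_create check against the exported namespace: build the filesystem, mount it, do a small write, unmount, and confirm the target is still alive. For the ext4 pass that has just completed, the traced sequence is roughly:

mkfs.ext4 -F /dev/nvme0n1p1          # -F since the partition may carry a previous filesystem
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                # minimal write through the NVMe/TCP path
sync
rm /mnt/device/aaa
umount /mnt/device
kill -0 "$nvmfpid"                   # target process (pid 1361061 in this run) must still be running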
01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1361061 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.704 00:11:20.704 real 0m1.347s 00:11:20.704 user 0m0.024s 00:11:20.704 sys 0m0.041s 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:20.704 ************************************ 00:11:20.704 END TEST filesystem_ext4 00:11:20.704 ************************************ 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:20.704 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.705 ************************************ 00:11:20.705 START TEST filesystem_btrfs 00:11:20.705 ************************************ 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:20.705 01:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:20.705 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:21.270 btrfs-progs v6.6.2 00:11:21.270 See https://btrfs.readthedocs.io for more information. 00:11:21.270 00:11:21.270 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:21.270 NOTE: several default settings have changed in version 5.15, please make sure 00:11:21.270 this does not affect your deployments: 00:11:21.270 - DUP for metadata (-m dup) 00:11:21.270 - enabled no-holes (-O no-holes) 00:11:21.270 - enabled free-space-tree (-R free-space-tree) 00:11:21.270 00:11:21.270 Label: (null) 00:11:21.270 UUID: 1f90afef-e42d-4cba-9e6d-a71e13bb3f68 00:11:21.270 Node size: 16384 00:11:21.270 Sector size: 4096 00:11:21.270 Filesystem size: 510.00MiB 00:11:21.270 Block group profiles: 00:11:21.270 Data: single 8.00MiB 00:11:21.270 Metadata: DUP 32.00MiB 00:11:21.270 System: DUP 8.00MiB 00:11:21.270 SSD detected: yes 00:11:21.270 Zoned device: no 00:11:21.270 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:21.270 Runtime features: free-space-tree 00:11:21.270 Checksum: crc32c 00:11:21.270 Number of devices: 1 00:11:21.270 Devices: 00:11:21.270 ID SIZE PATH 00:11:21.270 1 510.00MiB /dev/nvme0n1p1 00:11:21.270 00:11:21.270 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:21.270 01:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1361061 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:21.836 00:11:21.836 real 0m1.147s 00:11:21.836 user 0m0.013s 00:11:21.836 sys 0m0.121s 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:21.836 ************************************ 00:11:21.836 END TEST filesystem_btrfs 00:11:21.836 ************************************ 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.836 ************************************ 00:11:21.836 START TEST filesystem_xfs 00:11:21.836 ************************************ 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:21.836 01:49:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:21.836 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:21.836 = sectsz=512 attr=2, projid32bit=1 00:11:21.836 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:21.836 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:21.836 data = bsize=4096 blocks=130560, imaxpct=25 00:11:21.836 = sunit=0 swidth=0 blks 00:11:21.836 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:21.836 log =internal log bsize=4096 blocks=16384, version=2 00:11:21.836 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:21.836 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:22.768 Discarding blocks...Done. 00:11:22.768 01:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:22.768 01:49:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1361061 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:25.295 00:11:25.295 real 0m3.315s 00:11:25.295 user 0m0.013s 00:11:25.295 sys 0m0.065s 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:25.295 ************************************ 00:11:25.295 END TEST filesystem_xfs 00:11:25.295 ************************************ 00:11:25.295 01:49:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1361061 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1361061 ']' 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1361061 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1361061 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1361061' 00:11:25.553 killing process with pid 1361061 00:11:25.553 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1361061 00:11:25.553 01:49:40 
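Teardown of the no_in_capsule pass, condensed from the trace (the pid and NQN are this run's values):

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # host side: detach the controller
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill 1361061 && wait 1361061                       # stop the nvmf_tgt started for this pass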
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1361061 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:26.119 00:11:26.119 real 0m12.199s 00:11:26.119 user 0m46.753s 00:11:26.119 sys 0m1.859s 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.119 ************************************ 00:11:26.119 END TEST nvmf_filesystem_no_in_capsule 00:11:26.119 ************************************ 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.119 ************************************ 00:11:26.119 START TEST nvmf_filesystem_in_capsule 00:11:26.119 ************************************ 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1362742 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1362742 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1362742 ']' 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:26.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.119 01:49:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.119 [2024-07-24 01:49:40.914402] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:11:26.119 [2024-07-24 01:49:40.914483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.119 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.120 [2024-07-24 01:49:40.985499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.378 [2024-07-24 01:49:41.082782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.378 [2024-07-24 01:49:41.082849] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.378 [2024-07-24 01:49:41.082865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.378 [2024-07-24 01:49:41.082879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.378 [2024-07-24 01:49:41.082891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.378 [2024-07-24 01:49:41.082974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.378 [2024-07-24 01:49:41.083048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.378 [2024-07-24 01:49:41.083069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.378 [2024-07-24 01:49:41.083072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
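The nvmf_filesystem_in_capsule pass starting here repeats the same filesystem checks against a freshly started target; the only functional difference visible in the trace is the in_capsule argument (4096 instead of 0), which is handed to the transport, presumably as the in-capsule data size. Side by side (sketch):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule pass
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule pass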
00:11:26.378 [2024-07-24 01:49:41.226672] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.378 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.636 Malloc1 00:11:26.636 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.636 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:26.636 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.636 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.636 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.637 [2024-07-24 01:49:41.411393] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:26.637 01:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:26.637 { 00:11:26.637 "name": "Malloc1", 00:11:26.637 "aliases": [ 00:11:26.637 "08b3a5af-66f0-4d15-8107-4f15ba1c4c07" 00:11:26.637 ], 00:11:26.637 "product_name": "Malloc disk", 00:11:26.637 "block_size": 512, 00:11:26.637 "num_blocks": 1048576, 00:11:26.637 "uuid": "08b3a5af-66f0-4d15-8107-4f15ba1c4c07", 00:11:26.637 "assigned_rate_limits": { 00:11:26.637 "rw_ios_per_sec": 0, 00:11:26.637 "rw_mbytes_per_sec": 0, 00:11:26.637 "r_mbytes_per_sec": 0, 00:11:26.637 "w_mbytes_per_sec": 0 00:11:26.637 }, 00:11:26.637 "claimed": true, 00:11:26.637 "claim_type": "exclusive_write", 00:11:26.637 "zoned": false, 00:11:26.637 "supported_io_types": { 00:11:26.637 "read": true, 00:11:26.637 "write": true, 00:11:26.637 "unmap": true, 00:11:26.637 "flush": true, 00:11:26.637 "reset": true, 00:11:26.637 "nvme_admin": false, 00:11:26.637 "nvme_io": false, 00:11:26.637 "nvme_io_md": false, 00:11:26.637 "write_zeroes": true, 00:11:26.637 "zcopy": true, 00:11:26.637 "get_zone_info": false, 00:11:26.637 "zone_management": false, 00:11:26.637 "zone_append": false, 00:11:26.637 "compare": false, 00:11:26.637 "compare_and_write": false, 00:11:26.637 "abort": true, 00:11:26.637 "seek_hole": false, 00:11:26.637 "seek_data": false, 00:11:26.637 "copy": true, 00:11:26.637 "nvme_iov_md": false 00:11:26.637 }, 00:11:26.637 "memory_domains": [ 00:11:26.637 { 00:11:26.637 "dma_device_id": "system", 00:11:26.637 "dma_device_type": 1 00:11:26.637 }, 00:11:26.637 { 00:11:26.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.637 "dma_device_type": 2 00:11:26.637 } 00:11:26.637 ], 00:11:26.637 "driver_specific": {} 00:11:26.637 } 00:11:26.637 ]' 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:26.637 01:49:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:26.637 01:49:41 
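The jq calls above are how get_bdev_size turns that JSON dump into a size: 512-byte blocks times 1048576 blocks gives 536870912 bytes, i.e. the 512 MiB recorded as malloc_size. A hedged one-liner doing the same arithmetic outside the helper:

  # multiply block_size by num_blocks straight from the RPC output (sketch, not the suite's get_bdev_size)
  ./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'
  # prints 536870912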
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.570 01:49:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.570 01:49:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:27.570 01:49:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.570 01:49:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:27.570 01:49:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:29.465 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:29.466 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:29.466 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:29.466 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:29.466 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:29.466 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:29.722 01:49:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:30.654 01:49:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.586 ************************************ 00:11:31.586 START TEST filesystem_in_capsule_ext4 00:11:31.586 ************************************ 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:31.586 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:31.586 mke2fs 1.46.5 (30-Dec-2021) 00:11:31.586 Discarding device blocks: 0/522240 done 00:11:31.586 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:31.586 Filesystem UUID: 02d5c1bd-9280-4bbd-a1c0-f071ecbee3e7 00:11:31.586 Superblock backups stored on blocks: 00:11:31.586 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:31.586 00:11:31.586 Allocating group tables: 0/64 done 00:11:31.586 Writing inode tables: 0/64 done 00:11:31.843 Creating journal (8192 blocks): done 00:11:32.101 Writing superblocks and filesystem accounting information: 0/64 done 00:11:32.101 00:11:32.101 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:32.101 01:49:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.358 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1362742 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.616 00:11:32.616 real 0m1.019s 00:11:32.616 user 0m0.018s 00:11:32.616 sys 0m0.052s 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:32.616 ************************************ 00:11:32.616 END TEST filesystem_in_capsule_ext4 00:11:32.616 ************************************ 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.616 01:49:47 
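The ext4 pass that just finished is the same host-side smoke test every filesystem gets: find the namespace by serial, partition it, make the filesystem, exercise a file, unmount, and confirm the target is still alive. A condensed sketch with this run's values (device name and target PID will differ elsewhere):

  # locate the block device exposed for serial SPDKISFASTANDAWESOME
  dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  # one GPT partition spanning the namespace, then the filesystem under test
  parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  mkfs.ext4 -F "/dev/${dev}p1"
  # mount, create and remove a file, unmount, then check the target survived
  mkdir -p /mnt/device && mount "/dev/${dev}p1" /mnt/device
  touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"    # target PID, 1362742 in this run

The btrfs and xfs passes below repeat exactly this loop with mkfs.btrfs -f and mkfs.xfs -f.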
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.616 ************************************ 00:11:32.616 START TEST filesystem_in_capsule_btrfs 00:11:32.616 ************************************ 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:32.616 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:32.874 btrfs-progs v6.6.2 00:11:32.874 See https://btrfs.readthedocs.io for more information. 00:11:32.874 00:11:32.874 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:32.874 NOTE: several default settings have changed in version 5.15, please make sure 00:11:32.874 this does not affect your deployments: 00:11:32.874 - DUP for metadata (-m dup) 00:11:32.874 - enabled no-holes (-O no-holes) 00:11:32.874 - enabled free-space-tree (-R free-space-tree) 00:11:32.874 00:11:32.874 Label: (null) 00:11:32.874 UUID: c0caf5b1-9e44-43c6-a882-de1ad12fb41a 00:11:32.874 Node size: 16384 00:11:32.874 Sector size: 4096 00:11:32.874 Filesystem size: 510.00MiB 00:11:32.874 Block group profiles: 00:11:32.874 Data: single 8.00MiB 00:11:32.874 Metadata: DUP 32.00MiB 00:11:32.874 System: DUP 8.00MiB 00:11:32.874 SSD detected: yes 00:11:32.874 Zoned device: no 00:11:32.874 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:32.874 Runtime features: free-space-tree 00:11:32.874 Checksum: crc32c 00:11:32.874 Number of devices: 1 00:11:32.874 Devices: 00:11:32.874 ID SIZE PATH 00:11:32.874 1 510.00MiB /dev/nvme0n1p1 00:11:32.874 00:11:32.874 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:32.874 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1362742 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.131 00:11:33.131 real 0m0.529s 00:11:33.131 user 0m0.016s 00:11:33.131 sys 0m0.115s 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.131 01:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.131 ************************************ 00:11:33.131 END TEST filesystem_in_capsule_btrfs 00:11:33.131 ************************************ 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.131 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.131 ************************************ 00:11:33.131 START TEST filesystem_in_capsule_xfs 00:11:33.131 ************************************ 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:33.132 01:49:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:33.389 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:33.389 = sectsz=512 attr=2, projid32bit=1 00:11:33.389 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:33.389 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:33.389 data = bsize=4096 blocks=130560, imaxpct=25 00:11:33.389 = sunit=0 swidth=0 blks 00:11:33.389 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:33.389 log =internal log bsize=4096 blocks=16384, version=2 00:11:33.389 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:33.389 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:34.320 Discarding blocks...Done. 00:11:34.320 01:49:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:34.320 01:49:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1362742 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:36.251 00:11:36.251 real 0m2.946s 00:11:36.251 user 0m0.017s 00:11:36.251 sys 0m0.059s 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:36.251 ************************************ 00:11:36.251 END TEST filesystem_in_capsule_xfs 00:11:36.251 ************************************ 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:36.251 01:49:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.251 01:49:51 
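Once the xfs pass completes, the suite tears the host side down before deleting the subsystem: drop the test partition, flush, and detach the initiator; the subsystem delete and killprocess that follow in the log finish the target side. The same teardown as standalone commands:

  # remove partition 1 under an advisory lock, then detach the NVMe/TCP controller
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  # target-side cleanup, issued right after in the log
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1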
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1362742 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1362742 ']' 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1362742 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.251 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1362742 00:11:36.509 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:36.510 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:36.510 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1362742' 00:11:36.510 killing process with pid 1362742 00:11:36.510 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1362742 00:11:36.510 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1362742 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:36.768 00:11:36.768 real 0m10.705s 00:11:36.768 user 0m41.021s 
00:11:36.768 sys 0m1.692s 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.768 ************************************ 00:11:36.768 END TEST nvmf_filesystem_in_capsule 00:11:36.768 ************************************ 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.768 rmmod nvme_tcp 00:11:36.768 rmmod nvme_fabrics 00:11:36.768 rmmod nvme_keyring 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:36.768 01:49:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.302 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.302 00:11:39.302 real 0m27.541s 00:11:39.302 user 1m28.784s 00:11:39.302 sys 0m5.170s 00:11:39.302 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.302 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.302 ************************************ 00:11:39.302 END TEST nvmf_filesystem 00:11:39.302 ************************************ 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:39.303 ************************************ 00:11:39.303 START TEST nvmf_target_discovery 00:11:39.303 ************************************ 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:39.303 * Looking for test storage... 00:11:39.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.303 01:49:53 
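The host identity picked up above comes straight from nvme-cli. A small illustrative sketch of producing such a pair (the parameter expansion is an assumption for illustration; the trace only shows the expanded values, not how common.sh derives the ID):

  HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}        # keep just the <uuid> portion
  echo "$HOSTNQN" "$HOSTID"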
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.303 01:49:53 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.203 01:49:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.203 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:41.204 01:49:55 
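The PCI-to-netdev mapping above is done through sysfs. The same lookup can be repeated by hand for the two E810 functions found in this run (interface names as observed here):

  # list the kernel net interface bound to each detected function
  ls /sys/bus/pci/devices/0000:0a:00.0/net/    # cvl_0_0
  ls /sys/bus/pci/devices/0000:0a:00.1/net/    # cvl_0_1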
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.204 01:49:55 
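The ip commands above split the two ports into an initiator side and a dedicated target namespace. Condensed, the network layout for the discovery test is built like this (interface and namespace names as observed in this run):

  # move the target port into its own namespace and address both ends on 10.0.0.0/24
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up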
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:41.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:41.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:11:41.204 00:11:41.204 --- 10.0.0.2 ping statistics --- 00:11:41.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.204 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:11:41.204 00:11:41.204 --- 10.0.0.1 ping statistics --- 00:11:41.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.204 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1366078 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1366078 00:11:41.204 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1366078 ']' 00:11:41.205 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.205 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.205 01:49:55 
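With connectivity verified by the pings above, the target binary is launched inside the namespace and the harness waits for the RPC socket. A sketch of an equivalent launch-and-wait, using a plain polling loop instead of the suite's waitforlisten (paths relative to the SPDK checkout):

  # allow 4420/tcp in from the initiator port, then start nvmf_tgt inside the target namespace
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app answers (simplified stand-in for waitforlisten)
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do sleep 0.5; done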
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.205 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.205 01:49:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.205 [2024-07-24 01:49:56.033355] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:11:41.205 [2024-07-24 01:49:56.033444] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.205 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.463 [2024-07-24 01:49:56.099212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.463 [2024-07-24 01:49:56.189766] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.463 [2024-07-24 01:49:56.189833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.463 [2024-07-24 01:49:56.189846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.463 [2024-07-24 01:49:56.189857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.463 [2024-07-24 01:49:56.189867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.463 [2024-07-24 01:49:56.189951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.463 [2024-07-24 01:49:56.190030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.463 [2024-07-24 01:49:56.190092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.463 [2024-07-24 01:49:56.190090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.463 [2024-07-24 01:49:56.344841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.463 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.721 Null1 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.721 [2024-07-24 01:49:56.385194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.721 Null2 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:41.721 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 Null3 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 Null4 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.722 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:11:41.981 00:11:41.981 Discovery Log Number of Records 6, Generation counter 6 00:11:41.981 =====Discovery Log Entry 0====== 00:11:41.981 trtype: tcp 00:11:41.981 adrfam: ipv4 00:11:41.981 subtype: current discovery subsystem 00:11:41.981 treq: not required 00:11:41.981 portid: 0 00:11:41.981 trsvcid: 4420 00:11:41.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:41.981 traddr: 10.0.0.2 00:11:41.981 eflags: explicit discovery connections, duplicate discovery information 00:11:41.981 sectype: none 00:11:41.981 =====Discovery Log Entry 1====== 00:11:41.981 trtype: tcp 00:11:41.981 adrfam: ipv4 00:11:41.981 subtype: nvme subsystem 00:11:41.981 treq: not required 00:11:41.981 portid: 0 00:11:41.981 trsvcid: 4420 00:11:41.981 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:41.981 traddr: 10.0.0.2 00:11:41.981 eflags: none 00:11:41.981 sectype: none 00:11:41.981 =====Discovery Log Entry 2====== 00:11:41.981 trtype: tcp 00:11:41.981 adrfam: ipv4 00:11:41.981 subtype: nvme subsystem 00:11:41.981 treq: not required 00:11:41.981 portid: 0 00:11:41.981 trsvcid: 4420 00:11:41.981 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:41.981 traddr: 10.0.0.2 00:11:41.981 eflags: none 00:11:41.981 sectype: none 00:11:41.981 =====Discovery Log Entry 3====== 00:11:41.981 trtype: tcp 00:11:41.981 adrfam: ipv4 00:11:41.981 subtype: nvme subsystem 00:11:41.981 treq: not required 00:11:41.981 portid: 0 00:11:41.981 trsvcid: 4420 00:11:41.981 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:41.981 traddr: 10.0.0.2 00:11:41.981 eflags: none 00:11:41.981 sectype: none 00:11:41.981 =====Discovery Log Entry 4====== 00:11:41.981 trtype: tcp 00:11:41.981 adrfam: ipv4 00:11:41.981 subtype: nvme subsystem 00:11:41.981 treq: not required 00:11:41.981 portid: 0 00:11:41.981 trsvcid: 4420 00:11:41.981 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:41.981 traddr: 10.0.0.2 00:11:41.981 eflags: none 00:11:41.981 sectype: none 00:11:41.981 =====Discovery Log Entry 5====== 00:11:41.981 trtype: tcp 00:11:41.981 adrfam: ipv4 00:11:41.981 subtype: discovery subsystem referral 00:11:41.981 treq: not required 00:11:41.981 portid: 0 00:11:41.981 trsvcid: 4430 00:11:41.981 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:41.981 traddr: 10.0.0.2 00:11:41.981 eflags: none 00:11:41.981 sectype: none 00:11:41.981 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:41.981 Perform nvmf subsystem discovery via RPC 00:11:41.981 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:41.981 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.981 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.981 [ 00:11:41.981 { 00:11:41.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:41.981 "subtype": "Discovery", 00:11:41.981 "listen_addresses": [ 00:11:41.981 { 00:11:41.981 "trtype": "TCP", 00:11:41.981 "adrfam": "IPv4", 00:11:41.981 "traddr": "10.0.0.2", 00:11:41.981 "trsvcid": "4420" 00:11:41.981 } 00:11:41.981 ], 00:11:41.981 "allow_any_host": true, 00:11:41.981 "hosts": [] 00:11:41.981 }, 00:11:41.981 { 00:11:41.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.981 "subtype": "NVMe", 00:11:41.981 "listen_addresses": [ 00:11:41.981 { 00:11:41.981 "trtype": "TCP", 00:11:41.981 "adrfam": "IPv4", 00:11:41.981 
"traddr": "10.0.0.2", 00:11:41.981 "trsvcid": "4420" 00:11:41.981 } 00:11:41.981 ], 00:11:41.981 "allow_any_host": true, 00:11:41.981 "hosts": [], 00:11:41.981 "serial_number": "SPDK00000000000001", 00:11:41.981 "model_number": "SPDK bdev Controller", 00:11:41.981 "max_namespaces": 32, 00:11:41.981 "min_cntlid": 1, 00:11:41.981 "max_cntlid": 65519, 00:11:41.981 "namespaces": [ 00:11:41.981 { 00:11:41.981 "nsid": 1, 00:11:41.981 "bdev_name": "Null1", 00:11:41.981 "name": "Null1", 00:11:41.981 "nguid": "2BB8F62EC56F4743B5471C13870BD94C", 00:11:41.981 "uuid": "2bb8f62e-c56f-4743-b547-1c13870bd94c" 00:11:41.981 } 00:11:41.981 ] 00:11:41.981 }, 00:11:41.981 { 00:11:41.981 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.981 "subtype": "NVMe", 00:11:41.981 "listen_addresses": [ 00:11:41.981 { 00:11:41.981 "trtype": "TCP", 00:11:41.981 "adrfam": "IPv4", 00:11:41.981 "traddr": "10.0.0.2", 00:11:41.981 "trsvcid": "4420" 00:11:41.981 } 00:11:41.981 ], 00:11:41.981 "allow_any_host": true, 00:11:41.981 "hosts": [], 00:11:41.981 "serial_number": "SPDK00000000000002", 00:11:41.981 "model_number": "SPDK bdev Controller", 00:11:41.982 "max_namespaces": 32, 00:11:41.982 "min_cntlid": 1, 00:11:41.982 "max_cntlid": 65519, 00:11:41.982 "namespaces": [ 00:11:41.982 { 00:11:41.982 "nsid": 1, 00:11:41.982 "bdev_name": "Null2", 00:11:41.982 "name": "Null2", 00:11:41.982 "nguid": "E43F178E50594E67A7CEF6C7AC52F2BF", 00:11:41.982 "uuid": "e43f178e-5059-4e67-a7ce-f6c7ac52f2bf" 00:11:41.982 } 00:11:41.982 ] 00:11:41.982 }, 00:11:41.982 { 00:11:41.982 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:41.982 "subtype": "NVMe", 00:11:41.982 "listen_addresses": [ 00:11:41.982 { 00:11:41.982 "trtype": "TCP", 00:11:41.982 "adrfam": "IPv4", 00:11:41.982 "traddr": "10.0.0.2", 00:11:41.982 "trsvcid": "4420" 00:11:41.982 } 00:11:41.982 ], 00:11:41.982 "allow_any_host": true, 00:11:41.982 "hosts": [], 00:11:41.982 "serial_number": "SPDK00000000000003", 00:11:41.982 "model_number": "SPDK bdev Controller", 00:11:41.982 "max_namespaces": 32, 00:11:41.982 "min_cntlid": 1, 00:11:41.982 "max_cntlid": 65519, 00:11:41.982 "namespaces": [ 00:11:41.982 { 00:11:41.982 "nsid": 1, 00:11:41.982 "bdev_name": "Null3", 00:11:41.982 "name": "Null3", 00:11:41.982 "nguid": "7E5B65FFAE7D416A9E76481B5F5ACC45", 00:11:41.982 "uuid": "7e5b65ff-ae7d-416a-9e76-481b5f5acc45" 00:11:41.982 } 00:11:41.982 ] 00:11:41.982 }, 00:11:41.982 { 00:11:41.982 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:41.982 "subtype": "NVMe", 00:11:41.982 "listen_addresses": [ 00:11:41.982 { 00:11:41.982 "trtype": "TCP", 00:11:41.982 "adrfam": "IPv4", 00:11:41.982 "traddr": "10.0.0.2", 00:11:41.982 "trsvcid": "4420" 00:11:41.982 } 00:11:41.982 ], 00:11:41.982 "allow_any_host": true, 00:11:41.982 "hosts": [], 00:11:41.982 "serial_number": "SPDK00000000000004", 00:11:41.982 "model_number": "SPDK bdev Controller", 00:11:41.982 "max_namespaces": 32, 00:11:41.982 "min_cntlid": 1, 00:11:41.982 "max_cntlid": 65519, 00:11:41.982 "namespaces": [ 00:11:41.982 { 00:11:41.982 "nsid": 1, 00:11:41.982 "bdev_name": "Null4", 00:11:41.982 "name": "Null4", 00:11:41.982 "nguid": "C5B542690E50410C8598902C65958038", 00:11:41.982 "uuid": "c5b54269-0e50-410c-8598-902c65958038" 00:11:41.982 } 00:11:41.982 ] 00:11:41.982 } 00:11:41.982 ] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:41.982 01:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.982 01:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:41.982 rmmod nvme_tcp 00:11:41.982 rmmod nvme_fabrics 00:11:41.982 rmmod nvme_keyring 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:41.982 01:49:56 
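Note: teardown is the mirror image of setup: each subsystem and its null bdev is deleted, the referral is removed, and bdev_get_bdevs is used to confirm nothing was leaked before the target process is killed and the kernel modules and namespace are cleaned up just below. Condensed to a sketch (same hypothetical $RPC wrapper as above):

  # illustrative sketch of the symmetric teardown checked by discovery.sh
  RPC=./scripts/rpc.py
  for i in 1 2 3 4; do
      $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      $RPC bdev_null_delete "Null$i"
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  leftover=$($RPC bdev_get_bdevs | jq -r '.[].name')
  [ -z "$leftover" ] || echo "unexpected bdevs left behind: $leftover" >&2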
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1366078 ']' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1366078 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1366078 ']' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1366078 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1366078 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1366078' 00:11:41.982 killing process with pid 1366078 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1366078 00:11:41.982 01:49:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1366078 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.242 01:49:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.775 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:44.775 00:11:44.776 real 0m5.352s 00:11:44.776 user 0m4.240s 00:11:44.776 sys 0m1.820s 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.776 ************************************ 00:11:44.776 END TEST nvmf_target_discovery 00:11:44.776 ************************************ 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.776 ************************************ 00:11:44.776 START TEST nvmf_referrals 00:11:44.776 ************************************ 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.776 * Looking for test storage... 00:11:44.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.776 01:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.776 01:49:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:44.776 01:49:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:46.680 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.680 01:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:46.680 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:46.680 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:46.680 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 
00:11:46.681 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:46.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:46.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:11:46.681 00:11:46.681 --- 10.0.0.2 ping statistics --- 00:11:46.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.681 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:46.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:11:46.681 00:11:46.681 --- 10.0.0.1 ping statistics --- 00:11:46.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.681 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1368169 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1368169 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1368169 ']' 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
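Note: nvmfappstart launches nvmf_tgt inside the target namespace and then sits in waitforlisten until the application's RPC socket answers, which is what the "Waiting for process..." line above is reporting. Stripped of the test framework, the pattern is roughly as follows (a sketch, not the framework's actual code; rpc_get_methods is simply a cheap RPC to poll with):

  # illustrative sketch of nvmfappstart + waitforlisten
  NS=cvl_0_0_ns_spdk
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done

The UNIX-domain RPC socket lives on the filesystem, so rpc.py does not need to enter the namespace even though the TCP listener does.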
00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:46.681 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.681 [2024-07-24 01:50:01.498953] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:11:46.681 [2024-07-24 01:50:01.499052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.681 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.939 [2024-07-24 01:50:01.577872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.939 [2024-07-24 01:50:01.678038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.939 [2024-07-24 01:50:01.678102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.939 [2024-07-24 01:50:01.678118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.939 [2024-07-24 01:50:01.678132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.939 [2024-07-24 01:50:01.678144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.939 [2024-07-24 01:50:01.678203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.939 [2024-07-24 01:50:01.678259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.939 [2024-07-24 01:50:01.678312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.940 [2024-07-24 01:50:01.678315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.940 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:46.940 [2024-07-24 01:50:01.832989] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.197 01:50:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.197 [2024-07-24 01:50:01.845178] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.197 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.198 01:50:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.455 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.456 01:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:47.456 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
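The add/verify/remove cycle above is driven through the harness's rpc_cmd wrapper; stripped of the xtrace noise it is a handful of RPCs plus a jq projection. A minimal sketch, assuming scripts/rpc.py exposes the same nvmf_discovery_* methods the wrapper invokes:

# point the discovery service at a referral twice: once as a discovery referral, once at a subsystem NQN
sudo ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
sudo ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
# read back what the target advertises; expect 127.0.0.2 listed twice
sudo ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# remove a referral once it has been verified
sudo ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1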
00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.713 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.971 01:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:47.971 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.228 01:50:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:48.228 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:48.228 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:48.228 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
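The host-side verification leans on nvme-cli's JSON output; the jq filters in get_discovery_entries simply split the discovery log page by record subtype. The same checks can be run by hand (hostnqn/hostid flags dropped for brevity; the harness passes the generated host identity seen in the log):

# a referral registered against an NVM subsystem should carry that subsystem's NQN
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
# a referral registered with -n discovery should advertise the well-known discovery NQN
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'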
00:11:48.228 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:48.228 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.228 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
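nvmftestfini, whose output follows, unwinds the setup in reverse. Condensed to its effective commands (the pid variable stands for the nvmfpid the script recorded at startup; the harness retries the module removals a few times):

# detach the kernel NVMe/TCP host modules now that no controllers remain connected
sudo modprobe -v -r nvme-tcp
sudo modprobe -v -r nvme-fabrics
# stop the target application and clear the initiator-side test address
sudo kill "$nvmfpid"
sudo ip -4 addr flush cvl_0_1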
00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:48.486 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:48.486 rmmod nvme_tcp 00:11:48.743 rmmod nvme_fabrics 00:11:48.743 rmmod nvme_keyring 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1368169 ']' 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1368169 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1368169 ']' 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1368169 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1368169 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1368169' 00:11:48.743 killing process with pid 1368169 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1368169 00:11:48.743 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1368169 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.000 01:50:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.899 00:11:50.899 real 0m6.579s 00:11:50.899 user 0m9.329s 00:11:50.899 sys 0m2.186s 00:11:50.899 01:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:50.899 ************************************ 00:11:50.899 END TEST nvmf_referrals 00:11:50.899 ************************************ 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.899 ************************************ 00:11:50.899 START TEST nvmf_connect_disconnect 00:11:50.899 ************************************ 00:11:50.899 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:51.156 * Looking for test storage... 00:11:51.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.156 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.157 01:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:51.157 01:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:53.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:53.059 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:53.059 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.318 01:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:53.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:53.318 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.318 01:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.318 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.318 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.318 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:53.318 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:53.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:11:53.319 00:11:53.319 --- 10.0.0.2 ping statistics --- 00:11:53.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.319 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:53.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:53.319 00:11:53.319 --- 10.0.0.1 ping statistics --- 00:11:53.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.319 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1370455 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1370455 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1370455 ']' 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.319 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.319 [2024-07-24 01:50:08.161119] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
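The connect_disconnect test repeats the same physical-NIC bring-up seen earlier: the first e810 port (cvl_0_0) is moved into a private namespace to play the target, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the nvmf_tcp_init lines above (interface names and addresses as logged; the iptables rule only opens the 4420 data port):

sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
sudo ip addr add 10.0.0.1/24 dev cvl_0_1
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# reachability check in both directions, matching the pings above
ping -c 1 10.0.0.2
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1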
00:11:53.319 [2024-07-24 01:50:08.161192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.319 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.577 [2024-07-24 01:50:08.225055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.577 [2024-07-24 01:50:08.312713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.577 [2024-07-24 01:50:08.312772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.577 [2024-07-24 01:50:08.312800] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.577 [2024-07-24 01:50:08.312811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.577 [2024-07-24 01:50:08.312821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:53.577 [2024-07-24 01:50:08.312875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.577 [2024-07-24 01:50:08.312935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.577 [2024-07-24 01:50:08.313001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.577 [2024-07-24 01:50:08.313003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.577 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.577 [2024-07-24 01:50:08.465892] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.835 01:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:53.835 [2024-07-24 01:50:08.523439] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:53.835 01:50:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:56.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.296 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.326 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[the same one-line "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" notice repeats here once per remaining pass of the 100-iteration connect/disconnect loop, differing only in its timestamps; the intermediate repetitions are condensed]
00:15:44.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.266 rmmod nvme_tcp 00:15:44.266 rmmod nvme_fabrics 00:15:44.266 rmmod nvme_keyring 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1370455 ']' 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1370455 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1370455 ']' 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1370455 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1370455 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1370455' 00:15:44.266 killing process with pid 1370455 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1370455 00:15:44.266 01:53:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1370455 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:44.528 01:53:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:46.441 00:15:46.441 real 3m55.482s 00:15:46.441 user 14m56.659s 00:15:46.441 sys 0m34.385s 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.441 ************************************ 00:15:46.441 END TEST nvmf_connect_disconnect 00:15:46.441 ************************************ 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.441 ************************************ 00:15:46.441 START TEST nvmf_multitarget 00:15:46.441 ************************************ 00:15:46.441 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:46.700 * Looking for test storage... 00:15:46.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.700 01:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:46.700 01:54:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.606 01:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.606 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:48.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:48.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:48.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:48.607 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.607 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
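For reference, the network bring-up that nvmf_tcp_init performed in the records above condenses to the following shell sequence (a restatement of the commands visible in this log, not a verbatim copy of nvmf/common.sh; the cvl_0_0/cvl_0_1 interface names come from the two E810 ports discovered earlier):

  # Give the target-side port its own network namespace; the initiator port stays in the default namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends of the link: 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring the interfaces and the namespace loopback up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic on the listener port, then check reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The ping replies captured below confirm that both directions are reachable before the target application is started.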
00:15:48.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:15:48.608 00:15:48.608 --- 10.0.0.2 ping statistics --- 00:15:48.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.608 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:15:48.608 00:15:48.608 --- 10.0.0.1 ping statistics --- 00:15:48.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.608 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:48.608 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1401527 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1401527 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1401527 ']' 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
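At this point nvmfappstart has launched the target inside the namespace and waitforlisten is polling for its RPC socket. In outline that amounts to the following (a sketch: the use of scripts/rpc.py with rpc_get_methods as the readiness probe is an assumption, since the body of waitforlisten is not shown in this log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start nvmf_tgt inside the namespace that owns the 10.0.0.2 interface.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Block until the app answers on its UNIX-domain RPC socket (or dies first).
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done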
00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.868 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:48.868 [2024-07-24 01:54:03.547270] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:15:48.868 [2024-07-24 01:54:03.547373] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.868 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.868 [2024-07-24 01:54:03.611771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:48.868 [2024-07-24 01:54:03.697689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.868 [2024-07-24 01:54:03.697742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.868 [2024-07-24 01:54:03.697765] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.868 [2024-07-24 01:54:03.697777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.868 [2024-07-24 01:54:03.697787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.868 [2024-07-24 01:54:03.697865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.868 [2024-07-24 01:54:03.697928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.868 [2024-07-24 01:54:03.698005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:48.868 [2024-07-24 01:54:03.698007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:49.126 01:54:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:49.384 "nvmf_tgt_1" 00:15:49.384 01:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:49.384 "nvmf_tgt_2" 00:15:49.384 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:49.384 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:49.642 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:49.642 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:49.642 true 00:15:49.642 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:49.901 true 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.901 rmmod nvme_tcp 00:15:49.901 rmmod nvme_fabrics 00:15:49.901 rmmod nvme_keyring 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1401527 ']' 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1401527 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1401527 ']' 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1401527 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
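Stripped of the xtrace noise, the multitarget exercise above reduces to a short sequence of calls against the running app through the test's multitarget_rpc.py helper (a condensed sketch of the steps logged above; the -s 32 argument is passed exactly as the test passes it, without interpreting it here):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  # Only the default target exists at the start.
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]

  # Create two additional targets and confirm three are now reported.
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]

  # Delete them again and confirm only the default target remains.
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]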
00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1401527 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1401527' 00:15:49.901 killing process with pid 1401527 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1401527 00:15:49.901 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1401527 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.160 01:54:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:52.704 00:15:52.704 real 0m5.705s 00:15:52.704 user 0m6.584s 00:15:52.704 sys 0m1.829s 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 ************************************ 00:15:52.704 END TEST nvmf_multitarget 00:15:52.704 ************************************ 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:52.704 ************************************ 00:15:52.704 START TEST nvmf_rpc 00:15:52.704 ************************************ 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:52.704 * Looking for test storage... 
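Both tests above finish with the same nvmftestfini teardown, which in outline does the following (a sketch; killprocess and _remove_spdk_ns are autotest helpers whose bodies are not shown in this log, so the explicit namespace deletion and the retry delay are assumptions):

  # Unload the NVMe/TCP initiator modules; retried because the host side may still be tearing down.
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1   # assumed back-off, not visible in the log
  done
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt instance started for this test.
  kill "$nvmfpid" && wait "$nvmfpid"

  # Tear down the test network: remove the target namespace and flush the initiator address.
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1

The nvmf_rpc run that starts here repeats the same nvmftestinit bring-up before its own test body.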
00:15:52.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:52.704 01:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:52.704 01:54:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.612 01:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:54.612 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:54.612 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.612 
01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:54.612 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:54.612 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.612 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:54.613 01:54:09 
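Each matched PCI function is then resolved to its kernel interface through sysfs: in this run 0000:0a:00.0 and 0000:0a:00.1 map to cvl_0_0 and cvl_0_1, and with two interfaces available cvl_0_0 becomes the target side and cvl_0_1 the initiator side (10.0.0.2 and 10.0.0.1 respectively). The lookup condenses to:

# Condensed from the pci_net_devs handling above; interface names as seen in this run.
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs exposes the bound netdev here
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, leaving e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"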
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:54.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:15:54.613 00:15:54.613 --- 10.0.0.2 ping statistics --- 00:15:54.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.613 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:54.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:15:54.613 00:15:54.613 --- 10.0.0.1 ping statistics --- 00:15:54.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.613 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1404116 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.613 01:54:09 
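nvmf_tcp_init then wires the two ports back to back: the target port moves into a private network namespace, both sides get a 10.0.0.0/24 address, TCP port 4420 is opened on the initiator-facing interface, and reachability is confirmed in both directions before anything NVMe-related starts. Condensed from the commands traced above (namespace and interface names as seen in this run):

# Loopback wiring performed by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator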
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1404116 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1404116 ']' 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:54.613 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.613 [2024-07-24 01:54:09.280386] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:15:54.613 [2024-07-24 01:54:09.280467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.613 EAL: No free 2048 kB hugepages reported on node 1 00:15:54.613 [2024-07-24 01:54:09.347240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.613 [2024-07-24 01:54:09.434947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.613 [2024-07-24 01:54:09.435000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.613 [2024-07-24 01:54:09.435024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.613 [2024-07-24 01:54:09.435035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.613 [2024-07-24 01:54:09.435045] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
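nvmfappstart launches nvmf_tgt inside that namespace and waitforlisten blocks until the application's JSON-RPC socket is answering before any rpc_cmd calls are made. A hand-rolled sketch of the same idea, assuming the default /var/tmp/spdk.sock socket and a shortened binary path (waitforlisten's real polling logic lives in autotest_common.sh and is more involved):

# Sketch: start the target in the namespace and wait for its RPC socket.
# -i 0 = shared-memory id, -e 0xFFFF = tracepoint group mask, -m 0xF = run on cores 0-3.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  kill -0 "$nvmfpid" || exit 1     # give up if the target died during startup
  sleep 0.5
done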
00:15:54.613 [2024-07-24 01:54:09.435187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.613 [2024-07-24 01:54:09.435254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.613 [2024-07-24 01:54:09.435358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.613 [2024-07-24 01:54:09.435336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.872 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:54.872 "tick_rate": 2700000000, 00:15:54.872 "poll_groups": [ 00:15:54.872 { 00:15:54.873 "name": "nvmf_tgt_poll_group_000", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [] 00:15:54.873 }, 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_001", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [] 00:15:54.873 }, 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_002", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [] 00:15:54.873 }, 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_003", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [] 00:15:54.873 } 00:15:54.873 ] 00:15:54.873 }' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
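With the four reactors up, rpc.sh first checks that nvmf_get_stats reports one poll group per core of the 0xF mask; the jcount helper is just the jq filter piped through wc -l. Condensed:

# jcount '.poll_groups[].name' from target/rpc.sh: expect 4 poll groups for -m 0xF.
./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l    # 4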
00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.873 [2024-07-24 01:54:09.651777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:54.873 "tick_rate": 2700000000, 00:15:54.873 "poll_groups": [ 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_000", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [ 00:15:54.873 { 00:15:54.873 "trtype": "TCP" 00:15:54.873 } 00:15:54.873 ] 00:15:54.873 }, 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_001", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [ 00:15:54.873 { 00:15:54.873 "trtype": "TCP" 00:15:54.873 } 00:15:54.873 ] 00:15:54.873 }, 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_002", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [ 00:15:54.873 { 00:15:54.873 "trtype": "TCP" 00:15:54.873 } 00:15:54.873 ] 00:15:54.873 }, 00:15:54.873 { 00:15:54.873 "name": "nvmf_tgt_poll_group_003", 00:15:54.873 "admin_qpairs": 0, 00:15:54.873 "io_qpairs": 0, 00:15:54.873 "current_admin_qpairs": 0, 00:15:54.873 "current_io_qpairs": 0, 00:15:54.873 "pending_bdev_io": 0, 00:15:54.873 "completed_nvme_io": 0, 00:15:54.873 "transports": [ 00:15:54.873 { 00:15:54.873 "trtype": "TCP" 00:15:54.873 } 00:15:54.873 ] 00:15:54.873 } 00:15:54.873 ] 00:15:54.873 }' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:54.873 01:54:09 
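Creating the TCP transport is what populates each poll group's transports array; the script checks the before/after difference and that every qpair counter starts at zero (jsum is jq piped through an awk accumulator). Condensed:

# Before the transport exists the first poll group reports no transports.
./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'            # null
# Create the TCP transport with the options used by this run.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Afterwards every poll group carries a TCP entry and all counters are still zero.
./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'     # "TCP"
./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 0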
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.873 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.134 Malloc1 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.134 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.135 [2024-07-24 01:54:09.800189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:15:55.135 [2024-07-24 01:54:09.822649] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:55.135 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:55.135 could not add new controller: failed to write to nvme-fabrics device 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.135 01:54:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.708 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:55.708 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:55.708 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.708 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:55.708 01:54:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:57.631 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:57.890 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:57.891 [2024-07-24 01:54:12.583157] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:15:57.891 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:57.891 could not add new controller: failed to write to nvme-fabrics device 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.891 01:54:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.462 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:58.462 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:58.462 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.462 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:58.462 01:54:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
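This stretch of target/rpc.sh is an access-control round trip on cnode1: with allow_any_host disabled the initiator is rejected, whitelisting its host NQN makes the connect succeed, removing the host makes it fail again, and re-enabling allow_any_host opens the subsystem back up (the lsblk polling that follows confirms the final connect). Stripped of the xtrace noise, and with HOSTNQN standing for the uuid-based host NQN printed above, the sequence is (rpc_cmd in the trace is a wrapper around scripts/rpc.py):

# Condensed host-ACL sequence; HOSTNQN abbreviates nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # unknown hosts are rejected
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # fails: "does not allow host"
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"    # whitelist this host
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # succeeds
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # fails again
rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1      # open the subsystem to any host
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # succeeds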
00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.005 [2024-07-24 01:54:15.409503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.005 
01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.005 01:54:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:01.268 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:01.268 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:16:01.268 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.268 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:01.268 01:54:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:16:03.176 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:03.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
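From target/rpc.sh@81 onward the same subsystem is built and torn down five times; each pass creates the subsystem, attaches namespace 5 and the TCP listener, connects and disconnects the initiator, then removes the namespace and deletes the subsystem (the @93/@94 teardown calls follow just below). One pass, condensed, with $HOSTNQN as above:

# One iteration of the for-loop at target/rpc.sh@81.
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5     # explicit NSID 5
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1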
00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 [2024-07-24 01:54:18.181400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.436 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:04.004 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:04.004 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:16:04.004 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.004 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:04.004 01:54:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:16:05.908 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:05.908 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:05.908 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:06.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 [2024-07-24 01:54:20.916146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.167 01:54:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:06.735 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:06.735 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:16:06.735 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.735 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:06.735 01:54:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:09.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.280 01:54:23 
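The waitforserial / waitforserial_disconnect helpers that punctuate each pass are simple polling loops over lsblk. A sketch of the connect-side wait, reconstructed from the traced statements and using this run's serial; the real helper in autotest_common.sh also accepts an expected device count:

# Sketch of waitforserial: poll until a block device advertising the serial shows up.
waitforserial() {
  local serial=$1 i=0 nvme_devices=0 nvme_device_counter=1
  sleep 2                                    # give the fabrics connect a moment to settle
  while (( i++ <= 15 )); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == nvme_device_counter )) && return 0
    sleep 1
  done
  return 1
}
waitforserial SPDKISFASTANDAWESOME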
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 [2024-07-24 01:54:23.653477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.280 01:54:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.541 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:09.541 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:16:09.541 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.541 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:09.541 01:54:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:16:11.444 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.703 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.704 [2024-07-24 01:54:26.428294] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.704 01:54:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:12.270 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:12.270 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:16:12.271 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.271 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:12.271 01:54:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:16:14.177 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:14.177 01:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:14.177 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:14.177 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:14.177 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.177 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:16:14.177 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:14.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 [2024-07-24 01:54:29.153052] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 [2024-07-24 01:54:29.201085] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 [2024-07-24 01:54:29.249248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.437 [2024-07-24 01:54:29.297462] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.437 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 [2024-07-24 01:54:29.345628] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:14.697 "tick_rate": 2700000000, 00:16:14.697 "poll_groups": [ 00:16:14.697 { 00:16:14.697 "name": "nvmf_tgt_poll_group_000", 00:16:14.697 "admin_qpairs": 2, 00:16:14.697 "io_qpairs": 84, 00:16:14.697 "current_admin_qpairs": 0, 00:16:14.697 "current_io_qpairs": 0, 00:16:14.697 "pending_bdev_io": 0, 00:16:14.697 "completed_nvme_io": 137, 00:16:14.697 "transports": [ 00:16:14.697 { 00:16:14.697 "trtype": "TCP" 00:16:14.697 } 00:16:14.697 ] 00:16:14.697 }, 00:16:14.697 { 00:16:14.697 "name": "nvmf_tgt_poll_group_001", 00:16:14.697 "admin_qpairs": 2, 00:16:14.697 "io_qpairs": 84, 00:16:14.697 "current_admin_qpairs": 0, 00:16:14.697 "current_io_qpairs": 0, 00:16:14.697 "pending_bdev_io": 0, 00:16:14.697 "completed_nvme_io": 183, 00:16:14.697 "transports": [ 00:16:14.697 { 00:16:14.697 "trtype": "TCP" 00:16:14.697 } 00:16:14.697 ] 00:16:14.697 }, 00:16:14.697 { 00:16:14.697 "name": "nvmf_tgt_poll_group_002", 00:16:14.697 "admin_qpairs": 1, 00:16:14.697 "io_qpairs": 84, 00:16:14.697 "current_admin_qpairs": 0, 00:16:14.697 "current_io_qpairs": 0, 00:16:14.697 "pending_bdev_io": 0, 00:16:14.697 "completed_nvme_io": 144, 00:16:14.697 "transports": [ 00:16:14.697 { 00:16:14.697 "trtype": "TCP" 00:16:14.697 } 00:16:14.697 ] 00:16:14.697 }, 00:16:14.697 { 00:16:14.697 "name": "nvmf_tgt_poll_group_003", 00:16:14.697 "admin_qpairs": 2, 00:16:14.697 "io_qpairs": 84, 00:16:14.697 "current_admin_qpairs": 0, 00:16:14.697 "current_io_qpairs": 0, 00:16:14.697 "pending_bdev_io": 0, 00:16:14.697 "completed_nvme_io": 222, 00:16:14.697 "transports": [ 00:16:14.697 { 00:16:14.697 "trtype": "TCP" 00:16:14.697 } 00:16:14.697 ] 00:16:14.697 } 00:16:14.697 ] 00:16:14.697 }' 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:14.697 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.698 rmmod nvme_tcp 00:16:14.698 rmmod nvme_fabrics 00:16:14.698 rmmod nvme_keyring 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1404116 ']' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1404116 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1404116 ']' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1404116 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1404116 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1404116' 00:16:14.698 killing process with pid 1404116 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1404116 00:16:14.698 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1404116 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.956 01:54:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.496 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.496 00:16:17.496 real 0m24.797s 00:16:17.496 user 1m20.604s 00:16:17.496 sys 0m4.016s 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.497 ************************************ 00:16:17.497 END TEST nvmf_rpc 00:16:17.497 ************************************ 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.497 ************************************ 00:16:17.497 START TEST nvmf_invalid 00:16:17.497 ************************************ 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:17.497 * Looking for test storage... 00:16:17.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.497 01:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.497 01:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.497 01:54:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:19.401 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:19.401 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.401 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:19.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.402 01:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:19.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.402 01:54:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:19.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:16:19.402 00:16:19.402 --- 10.0.0.2 ping statistics --- 00:16:19.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.402 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:16:19.402 00:16:19.402 --- 10.0.0.1 ping statistics --- 00:16:19.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.402 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1408601 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1408601 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1408601 ']' 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.402 01:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:19.402 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:19.402 [2024-07-24 01:54:34.186505] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:16:19.402 [2024-07-24 01:54:34.186587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.402 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.402 [2024-07-24 01:54:34.251922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.661 [2024-07-24 01:54:34.342135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.661 [2024-07-24 01:54:34.342198] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.661 [2024-07-24 01:54:34.342215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.661 [2024-07-24 01:54:34.342228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.661 [2024-07-24 01:54:34.342240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:19.661 [2024-07-24 01:54:34.342501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.661 [2024-07-24 01:54:34.342523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.661 [2024-07-24 01:54:34.342578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.661 [2024-07-24 01:54:34.342581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:19.661 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1889 00:16:19.947 [2024-07-24 01:54:34.705409] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:19.947 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:19.947 { 00:16:19.947 "nqn": "nqn.2016-06.io.spdk:cnode1889", 00:16:19.947 "tgt_name": "foobar", 00:16:19.947 "method": "nvmf_create_subsystem", 00:16:19.947 "req_id": 1 00:16:19.947 } 00:16:19.947 Got JSON-RPC error response 00:16:19.947 response: 00:16:19.947 { 00:16:19.947 "code": -32603, 00:16:19.947 "message": "Unable to find target foobar" 00:16:19.947 }' 00:16:19.947 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:19.947 { 00:16:19.947 "nqn": "nqn.2016-06.io.spdk:cnode1889", 00:16:19.947 "tgt_name": "foobar", 00:16:19.947 "method": "nvmf_create_subsystem", 00:16:19.947 "req_id": 1 00:16:19.947 } 00:16:19.947 Got JSON-RPC error response 00:16:19.947 response: 00:16:19.947 { 00:16:19.947 "code": -32603, 00:16:19.947 "message": "Unable to find target foobar" 00:16:19.947 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:19.947 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:19.947 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3418 00:16:20.205 [2024-07-24 01:54:34.962262] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3418: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:20.205 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:20.205 { 00:16:20.205 "nqn": "nqn.2016-06.io.spdk:cnode3418", 00:16:20.205 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:20.205 "method": "nvmf_create_subsystem", 00:16:20.205 "req_id": 1 00:16:20.205 } 00:16:20.205 Got JSON-RPC error response 
00:16:20.205 response: 00:16:20.205 { 00:16:20.205 "code": -32602, 00:16:20.205 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:20.205 }' 00:16:20.205 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:20.205 { 00:16:20.205 "nqn": "nqn.2016-06.io.spdk:cnode3418", 00:16:20.205 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:20.205 "method": "nvmf_create_subsystem", 00:16:20.205 "req_id": 1 00:16:20.205 } 00:16:20.205 Got JSON-RPC error response 00:16:20.205 response: 00:16:20.205 { 00:16:20.205 "code": -32602, 00:16:20.205 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:20.205 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:20.205 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:20.205 01:54:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7848 00:16:20.464 [2024-07-24 01:54:35.211064] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7848: invalid model number 'SPDK_Controller' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:20.464 { 00:16:20.464 "nqn": "nqn.2016-06.io.spdk:cnode7848", 00:16:20.464 "model_number": "SPDK_Controller\u001f", 00:16:20.464 "method": "nvmf_create_subsystem", 00:16:20.464 "req_id": 1 00:16:20.464 } 00:16:20.464 Got JSON-RPC error response 00:16:20.464 response: 00:16:20.464 { 00:16:20.464 "code": -32602, 00:16:20.464 "message": "Invalid MN SPDK_Controller\u001f" 00:16:20.464 }' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:20.464 { 00:16:20.464 "nqn": "nqn.2016-06.io.spdk:cnode7848", 00:16:20.464 "model_number": "SPDK_Controller\u001f", 00:16:20.464 "method": "nvmf_create_subsystem", 00:16:20.464 "req_id": 1 00:16:20.464 } 00:16:20.464 Got JSON-RPC error response 00:16:20.464 response: 00:16:20.464 { 00:16:20.464 "code": -32602, 00:16:20.464 "message": "Invalid MN SPDK_Controller\u001f" 00:16:20.464 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 118 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:16:20.464 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=t 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vLt(h ca(^K&:8`\Z@Nt' 00:16:20.465 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vLt(h ca(^K&:8`\Z@Nt' nqn.2016-06.io.spdk:cnode4318 00:16:20.723 [2024-07-24 01:54:35.544197] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4318: invalid serial number 'vLt(h ca(^K&:8`\Z@Nt' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:20.724 { 00:16:20.724 "nqn": "nqn.2016-06.io.spdk:cnode4318", 00:16:20.724 "serial_number": "vLt\u007f(h ca(^K&:8`\\Z@Nt", 00:16:20.724 "method": "nvmf_create_subsystem", 00:16:20.724 "req_id": 1 00:16:20.724 } 00:16:20.724 Got JSON-RPC error response 00:16:20.724 response: 00:16:20.724 { 00:16:20.724 "code": -32602, 00:16:20.724 "message": "Invalid SN vLt\u007f(h ca(^K&:8`\\Z@Nt" 00:16:20.724 }' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:20.724 { 00:16:20.724 "nqn": "nqn.2016-06.io.spdk:cnode4318", 00:16:20.724 "serial_number": "vLt\u007f(h ca(^K&:8`\\Z@Nt", 00:16:20.724 "method": "nvmf_create_subsystem", 00:16:20.724 "req_id": 1 00:16:20.724 } 00:16:20.724 Got JSON-RPC error response 00:16:20.724 response: 00:16:20.724 { 00:16:20.724 "code": -32602, 00:16:20.724 "message": "Invalid SN vLt\u007f(h ca(^K&:8`\\Z@Nt" 00:16:20.724 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:20.724 01:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:16:20.724 01:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:20.724 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.725 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:20.984 
01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.984 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'[Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9' 00:16:20.985 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '[Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9' nqn.2016-06.io.spdk:cnode18034 00:16:21.243 [2024-07-24 01:54:35.917389] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18034: invalid model number '[Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9' 00:16:21.243 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:21.243 { 00:16:21.243 "nqn": "nqn.2016-06.io.spdk:cnode18034", 00:16:21.243 "model_number": "[Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9", 00:16:21.243 "method": "nvmf_create_subsystem", 00:16:21.243 "req_id": 1 00:16:21.243 } 00:16:21.243 Got JSON-RPC error response 00:16:21.243 response: 00:16:21.243 { 00:16:21.243 "code": -32602, 00:16:21.243 "message": "Invalid MN [Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9" 00:16:21.243 }' 00:16:21.243 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:21.243 { 00:16:21.243 "nqn": "nqn.2016-06.io.spdk:cnode18034", 00:16:21.243 "model_number": "[Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9", 00:16:21.243 "method": "nvmf_create_subsystem", 00:16:21.243 "req_id": 1 00:16:21.243 } 00:16:21.243 Got JSON-RPC error response 00:16:21.243 response: 00:16:21.243 { 00:16:21.243 "code": -32602, 00:16:21.243 "message": "Invalid MN [Hw0#wb=Qa8v}LgO1/8v(ZaOF*%n`&N=]Q`^?d]C9" 00:16:21.243 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:21.243 01:54:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:21.501 [2024-07-24 01:54:36.162239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.501 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:21.759 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:21.759 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:21.759 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:21.759 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:21.759 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:22.017 [2024-07-24 01:54:36.667889] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:22.017 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:22.017 { 00:16:22.017 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:22.017 "listen_address": { 00:16:22.017 "trtype": "tcp", 00:16:22.017 "traddr": "", 00:16:22.017 "trsvcid": "4421" 00:16:22.017 }, 00:16:22.017 "method": "nvmf_subsystem_remove_listener", 00:16:22.017 "req_id": 1 00:16:22.017 } 00:16:22.017 Got JSON-RPC error response 00:16:22.017 response: 00:16:22.017 { 00:16:22.017 "code": -32602, 00:16:22.017 "message": "Invalid parameters" 00:16:22.017 }' 00:16:22.017 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 
request: 00:16:22.017 { 00:16:22.017 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:22.017 "listen_address": { 00:16:22.017 "trtype": "tcp", 00:16:22.017 "traddr": "", 00:16:22.017 "trsvcid": "4421" 00:16:22.017 }, 00:16:22.017 "method": "nvmf_subsystem_remove_listener", 00:16:22.017 "req_id": 1 00:16:22.017 } 00:16:22.017 Got JSON-RPC error response 00:16:22.017 response: 00:16:22.017 { 00:16:22.017 "code": -32602, 00:16:22.017 "message": "Invalid parameters" 00:16:22.017 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:22.017 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22394 -i 0 00:16:22.276 [2024-07-24 01:54:36.928726] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22394: invalid cntlid range [0-65519] 00:16:22.276 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:22.276 { 00:16:22.276 "nqn": "nqn.2016-06.io.spdk:cnode22394", 00:16:22.276 "min_cntlid": 0, 00:16:22.276 "method": "nvmf_create_subsystem", 00:16:22.276 "req_id": 1 00:16:22.276 } 00:16:22.276 Got JSON-RPC error response 00:16:22.276 response: 00:16:22.276 { 00:16:22.276 "code": -32602, 00:16:22.276 "message": "Invalid cntlid range [0-65519]" 00:16:22.276 }' 00:16:22.276 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:22.276 { 00:16:22.276 "nqn": "nqn.2016-06.io.spdk:cnode22394", 00:16:22.276 "min_cntlid": 0, 00:16:22.276 "method": "nvmf_create_subsystem", 00:16:22.276 "req_id": 1 00:16:22.276 } 00:16:22.276 Got JSON-RPC error response 00:16:22.276 response: 00:16:22.276 { 00:16:22.276 "code": -32602, 00:16:22.276 "message": "Invalid cntlid range [0-65519]" 00:16:22.276 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:22.276 01:54:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4495 -i 65520 00:16:22.534 [2024-07-24 01:54:37.185559] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4495: invalid cntlid range [65520-65519] 00:16:22.534 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:22.534 { 00:16:22.534 "nqn": "nqn.2016-06.io.spdk:cnode4495", 00:16:22.534 "min_cntlid": 65520, 00:16:22.534 "method": "nvmf_create_subsystem", 00:16:22.534 "req_id": 1 00:16:22.534 } 00:16:22.534 Got JSON-RPC error response 00:16:22.534 response: 00:16:22.534 { 00:16:22.534 "code": -32602, 00:16:22.534 "message": "Invalid cntlid range [65520-65519]" 00:16:22.534 }' 00:16:22.534 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:22.534 { 00:16:22.534 "nqn": "nqn.2016-06.io.spdk:cnode4495", 00:16:22.534 "min_cntlid": 65520, 00:16:22.534 "method": "nvmf_create_subsystem", 00:16:22.534 "req_id": 1 00:16:22.534 } 00:16:22.534 Got JSON-RPC error response 00:16:22.534 response: 00:16:22.534 { 00:16:22.534 "code": -32602, 00:16:22.534 "message": "Invalid cntlid range [65520-65519]" 00:16:22.534 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:22.534 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11762 -I 0 00:16:22.791 [2024-07-24 01:54:37.438444] 
nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11762: invalid cntlid range [1-0] 00:16:22.791 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:22.791 { 00:16:22.791 "nqn": "nqn.2016-06.io.spdk:cnode11762", 00:16:22.791 "max_cntlid": 0, 00:16:22.791 "method": "nvmf_create_subsystem", 00:16:22.791 "req_id": 1 00:16:22.791 } 00:16:22.791 Got JSON-RPC error response 00:16:22.791 response: 00:16:22.791 { 00:16:22.791 "code": -32602, 00:16:22.791 "message": "Invalid cntlid range [1-0]" 00:16:22.791 }' 00:16:22.791 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:22.791 { 00:16:22.791 "nqn": "nqn.2016-06.io.spdk:cnode11762", 00:16:22.791 "max_cntlid": 0, 00:16:22.791 "method": "nvmf_create_subsystem", 00:16:22.791 "req_id": 1 00:16:22.791 } 00:16:22.791 Got JSON-RPC error response 00:16:22.791 response: 00:16:22.791 { 00:16:22.791 "code": -32602, 00:16:22.791 "message": "Invalid cntlid range [1-0]" 00:16:22.791 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:22.791 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24955 -I 65520 00:16:23.048 [2024-07-24 01:54:37.691190] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24955: invalid cntlid range [1-65520] 00:16:23.048 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:23.048 { 00:16:23.048 "nqn": "nqn.2016-06.io.spdk:cnode24955", 00:16:23.048 "max_cntlid": 65520, 00:16:23.048 "method": "nvmf_create_subsystem", 00:16:23.048 "req_id": 1 00:16:23.048 } 00:16:23.048 Got JSON-RPC error response 00:16:23.048 response: 00:16:23.048 { 00:16:23.048 "code": -32602, 00:16:23.048 "message": "Invalid cntlid range [1-65520]" 00:16:23.048 }' 00:16:23.048 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:23.048 { 00:16:23.048 "nqn": "nqn.2016-06.io.spdk:cnode24955", 00:16:23.048 "max_cntlid": 65520, 00:16:23.048 "method": "nvmf_create_subsystem", 00:16:23.048 "req_id": 1 00:16:23.048 } 00:16:23.048 Got JSON-RPC error response 00:16:23.048 response: 00:16:23.048 { 00:16:23.048 "code": -32602, 00:16:23.048 "message": "Invalid cntlid range [1-65520]" 00:16:23.048 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:23.048 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31956 -i 6 -I 5 00:16:23.048 [2024-07-24 01:54:37.940048] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31956: invalid cntlid range [6-5] 00:16:23.306 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:23.306 { 00:16:23.306 "nqn": "nqn.2016-06.io.spdk:cnode31956", 00:16:23.306 "min_cntlid": 6, 00:16:23.306 "max_cntlid": 5, 00:16:23.306 "method": "nvmf_create_subsystem", 00:16:23.306 "req_id": 1 00:16:23.306 } 00:16:23.306 Got JSON-RPC error response 00:16:23.306 response: 00:16:23.306 { 00:16:23.306 "code": -32602, 00:16:23.306 "message": "Invalid cntlid range [6-5]" 00:16:23.306 }' 00:16:23.306 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:23.306 { 00:16:23.306 "nqn": "nqn.2016-06.io.spdk:cnode31956", 
00:16:23.306 "min_cntlid": 6, 00:16:23.306 "max_cntlid": 5, 00:16:23.306 "method": "nvmf_create_subsystem", 00:16:23.306 "req_id": 1 00:16:23.306 } 00:16:23.306 Got JSON-RPC error response 00:16:23.306 response: 00:16:23.306 { 00:16:23.306 "code": -32602, 00:16:23.306 "message": "Invalid cntlid range [6-5]" 00:16:23.306 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:23.306 01:54:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:23.306 { 00:16:23.306 "name": "foobar", 00:16:23.306 "method": "nvmf_delete_target", 00:16:23.306 "req_id": 1 00:16:23.306 } 00:16:23.306 Got JSON-RPC error response 00:16:23.306 response: 00:16:23.306 { 00:16:23.306 "code": -32602, 00:16:23.306 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:23.306 }' 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:23.306 { 00:16:23.306 "name": "foobar", 00:16:23.306 "method": "nvmf_delete_target", 00:16:23.306 "req_id": 1 00:16:23.306 } 00:16:23.306 Got JSON-RPC error response 00:16:23.306 response: 00:16:23.306 { 00:16:23.306 "code": -32602, 00:16:23.306 "message": "The specified target doesn't exist, cannot delete it." 00:16:23.306 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:23.306 rmmod nvme_tcp 00:16:23.306 rmmod nvme_fabrics 00:16:23.306 rmmod nvme_keyring 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1408601 ']' 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1408601 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1408601 ']' 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1408601 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:23.306 01:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1408601 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1408601' 00:16:23.306 killing process with pid 1408601 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1408601 00:16:23.306 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1408601 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.564 01:54:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:26.101 00:16:26.101 real 0m8.524s 00:16:26.101 user 0m19.732s 00:16:26.101 sys 0m2.423s 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 ************************************ 00:16:26.101 END TEST nvmf_invalid 00:16:26.101 ************************************ 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.101 ************************************ 00:16:26.101 START TEST nvmf_connect_stress 00:16:26.101 ************************************ 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:26.101 * Looking for test storage... 
00:16:26.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.101 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:26.102 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:26.102 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:26.102 01:54:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:28.058 01:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:28.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:28.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
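[Editor's note] The trace above walks the supported-NIC allow-list (Intel E810 0x1592/0x159b, X722 0x37d2, plus several Mellanox ConnectX IDs) and matches the two E810 ports at 0000:0a:00.0 and 0000:0a:00.1 bound to the ice driver. A minimal stand-alone sketch of that discovery step, assuming lspci and sysfs are available (device IDs copied from the trace; this is not the test's own code):
    # Sketch: enumerate Intel E810 (0x8086:0x159b) ports and the net devices behind
    # them, mirroring the pci_devs / pci_net_devs scan traced above.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        netdevs=$(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)
        echo "Found $pci (0x8086 - 0x159b): ${netdevs:-no netdev bound}"
    done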
00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:28.058 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:28.058 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:28.058 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:28.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:16:28.059 00:16:28.059 --- 10.0.0.2 ping statistics --- 00:16:28.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.059 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:16:28.059 00:16:28.059 --- 10.0.0.1 ping statistics --- 00:16:28.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.059 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1411165 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1411165 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1411165 ']' 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:28.059 01:54:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.059 [2024-07-24 01:54:42.717841] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
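[Editor's note] The ip/iptables commands traced above are nvmf_tcp_init in full: one E810 port (cvl_0_0) is moved into a dedicated namespace and addressed as the target, the other (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened, and reachability is ping-verified in both directions before nvmf_tgt is launched inside the namespace. Condensed into a stand-alone sketch, with interface names and addresses reused verbatim from the trace:
    # Target NIC goes into its own namespace; initiator NIC stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP port
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator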
00:16:28.059 [2024-07-24 01:54:42.717926] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.059 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.059 [2024-07-24 01:54:42.791822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.059 [2024-07-24 01:54:42.884509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.059 [2024-07-24 01:54:42.884571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.059 [2024-07-24 01:54:42.884584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.059 [2024-07-24 01:54:42.884610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.059 [2024-07-24 01:54:42.884619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.059 [2024-07-24 01:54:42.884702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.059 [2024-07-24 01:54:42.884776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.059 [2024-07-24 01:54:42.884778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.318 [2024-07-24 01:54:43.028654] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.318 [2024-07-24 01:54:43.063474] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.318 NULL1 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1411260 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.318 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.319 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.319 01:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:28.591 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.591 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:28.591 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:28.591 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.591 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.159 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.159 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:29.159 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.159 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.159 01:54:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.418 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.418 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:29.418 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.418 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.418 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.677 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.677 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:29.677 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.677 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.677 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:29.934 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.934 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:29.934 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.934 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.934 01:54:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:30.194 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.194 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:30.194 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.194 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.194 01:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:30.763 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.764 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:30.764 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.764 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.764 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.023 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.023 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:31.023 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.023 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.023 01:54:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.280 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.281 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:31.281 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.281 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.281 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.538 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.538 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:31.538 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.538 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.538 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:31.798 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.798 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:31.798 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.798 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.798 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:32.367 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.367 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:32.367 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.367 01:54:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.367 01:54:46 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:32.625 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.625 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:32.625 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.625 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.625 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:32.882 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.882 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:32.882 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.882 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.882 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.139 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.139 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:33.139 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.139 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.139 01:54:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.397 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.397 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:33.397 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.397 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.397 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:33.966 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.966 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:33.966 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.966 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.966 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.224 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.224 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:34.224 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.224 01:54:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.224 01:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.483 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.483 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:34.483 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.483 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.483 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:34.742 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.742 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:34.742 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.742 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.742 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.002 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.002 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:35.002 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.002 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.002 01:54:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.571 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.571 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:35.571 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.572 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.572 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:35.830 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.830 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:35.830 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.830 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.830 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.087 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.087 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:36.087 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.087 01:54:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.087 01:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.346 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.346 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:36.346 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.346 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.346 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:36.605 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.605 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:36.605 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.605 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.605 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.172 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.172 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:37.172 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.172 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.172 01:54:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.431 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.431 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:37.431 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.431 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.431 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.717 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.717 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:37.717 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.717 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.717 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:37.975 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.975 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:37.975 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.975 01:54:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.975 01:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.234 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.234 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:38.234 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.234 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.234 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:38.494 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1411260 00:16:38.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1411260) - No such process 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1411260 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.754 rmmod nvme_tcp 00:16:38.754 rmmod nvme_fabrics 00:16:38.754 rmmod nvme_keyring 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1411165 ']' 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1411165 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1411165 ']' 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1411165 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.754 01:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1411165 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1411165' 00:16:38.754 killing process with pid 1411165 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1411165 00:16:38.754 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1411165 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:39.013 01:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:40.916 00:16:40.916 real 0m15.275s 00:16:40.916 user 0m38.337s 00:16:40.916 sys 0m5.839s 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:40.916 ************************************ 00:16:40.916 END TEST nvmf_connect_stress 00:16:40.916 ************************************ 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:40.916 ************************************ 00:16:40.916 START TEST nvmf_fused_ordering 00:16:40.916 ************************************ 00:16:40.916 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:41.174 * Looking for test storage... 
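[Editor's note] With nvmf_connect_stress finished (0m15.275s wall, 38.337s user, 5.839s sys) the target is torn down and the fused_ordering test begins. For reference, the target-side sequence that the connect_stress run traced earlier reduces to roughly the following calls, written here as a sketch under the assumption that rpc_cmd forwards to scripts/rpc.py (paths shortened; all argument values are taken from the trace):
    # Target setup against the nvmf_tgt started inside cvl_0_0_ns_spdk (-m 0xE, -e 0xFFFF):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    # Initiator side: hammer connects against that subsystem for 10 seconds.
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10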
00:16:41.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.174 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.175 01:54:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.076 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.076 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.076 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.076 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:43.337 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.338 01:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:43.338 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:43.338 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
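The trace above shows nvmf/common.sh picking the host's two Intel E810 ports (vendor 0x8086, device 0x159b, bound to the ice driver) out of its PCI cache as the NICs for this TCP run. A rough hand-run equivalent of that lookup, assuming only that pciutils is installed and reusing the same vendor:device pair reported in the log, would be:

    # Enumerate every E810 function and the kernel net interface(s) behind it in sysfs,
    # the same PCI-address -> netdev association the harness prints further down.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net device under $pci: $(basename "$netdir")"
        done
    done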
00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:43.338 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:43.338 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.338 01:54:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:16:43.338 00:16:43.338 --- 10.0.0.2 ping statistics --- 00:16:43.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.338 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:43.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:16:43.338 00:16:43.338 --- 10.0.0.1 ping statistics --- 00:16:43.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.338 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.338 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1414398 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1414398 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1414398 ']' 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.339 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.339 [2024-07-24 01:54:58.191930] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
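Condensed from the commands traced above: the harness moves one E810 port (cvl_0_0) into a private network namespace to act as the target side, keeps the other (cvl_0_1) in the root namespace as the initiator side, checks reachability both ways, and then launches nvmf_tgt inside that namespace. A minimal sketch of the same setup, with the nvmf_tgt path shortened to be relative to the SPDK source root:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow inbound NVMe/TCP (port 4420) on the initiator-side interface
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
    # -m 0x2 pins the single reactor to core 1; -e 0xFFFF enables the tracepoint groups noted just below.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # The harness then waits for the app's RPC socket (/var/tmp/spdk.sock) before issuing RPCs.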
00:16:43.339 [2024-07-24 01:54:58.192014] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.339 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.598 [2024-07-24 01:54:58.257334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.598 [2024-07-24 01:54:58.346160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.598 [2024-07-24 01:54:58.346221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.598 [2024-07-24 01:54:58.346235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.598 [2024-07-24 01:54:58.346246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.598 [2024-07-24 01:54:58.346255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.598 [2024-07-24 01:54:58.346281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.598 [2024-07-24 01:54:58.487076] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.598 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.858 [2024-07-24 01:54:58.503286] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.858 NULL1 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.858 01:54:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:43.858 [2024-07-24 01:54:58.549607] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
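The rpc_cmd calls traced above are what provision the target for this test case; rpc_cmd is the test framework's wrapper around scripts/rpc.py, so an equivalent sequence can be issued directly from the SPDK source root against the default /var/tmp/spdk.sock (the Unix socket is not affected by the network namespace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512     # ~1 GB null bdev, 512-byte blocks (the "size: 1GB" namespace below)
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # Drive the fused-ordering exerciser at that listener; each fused_ordering(N) line
    # below is progress output from the app as it works through its command sequence.
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'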
00:16:43.858 [2024-07-24 01:54:58.549651] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414515 ] 00:16:43.858 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.118 Attached to nqn.2016-06.io.spdk:cnode1 00:16:44.118 Namespace ID: 1 size: 1GB 00:16:44.118 fused_ordering(0) 00:16:44.118 fused_ordering(1) 00:16:44.118 fused_ordering(2) 00:16:44.118 fused_ordering(3) 00:16:44.118 fused_ordering(4) 00:16:44.118 fused_ordering(5) 00:16:44.118 fused_ordering(6) 00:16:44.118 fused_ordering(7) 00:16:44.118 fused_ordering(8) 00:16:44.118 fused_ordering(9) 00:16:44.118 fused_ordering(10) 00:16:44.118 fused_ordering(11) 00:16:44.118 fused_ordering(12) 00:16:44.118 fused_ordering(13) 00:16:44.118 fused_ordering(14) 00:16:44.118 fused_ordering(15) 00:16:44.118 fused_ordering(16) 00:16:44.118 fused_ordering(17) 00:16:44.118 fused_ordering(18) 00:16:44.118 fused_ordering(19) 00:16:44.118 fused_ordering(20) 00:16:44.118 fused_ordering(21) 00:16:44.118 fused_ordering(22) 00:16:44.118 fused_ordering(23) 00:16:44.118 fused_ordering(24) 00:16:44.118 fused_ordering(25) 00:16:44.118 fused_ordering(26) 00:16:44.118 fused_ordering(27) 00:16:44.118 fused_ordering(28) 00:16:44.118 fused_ordering(29) 00:16:44.118 fused_ordering(30) 00:16:44.118 fused_ordering(31) 00:16:44.118 fused_ordering(32) 00:16:44.118 fused_ordering(33) 00:16:44.118 fused_ordering(34) 00:16:44.118 fused_ordering(35) 00:16:44.118 fused_ordering(36) 00:16:44.118 fused_ordering(37) 00:16:44.118 fused_ordering(38) 00:16:44.118 fused_ordering(39) 00:16:44.118 fused_ordering(40) 00:16:44.118 fused_ordering(41) 00:16:44.118 fused_ordering(42) 00:16:44.118 fused_ordering(43) 00:16:44.118 fused_ordering(44) 00:16:44.118 fused_ordering(45) 00:16:44.118 fused_ordering(46) 00:16:44.118 fused_ordering(47) 00:16:44.118 fused_ordering(48) 00:16:44.118 fused_ordering(49) 00:16:44.118 fused_ordering(50) 00:16:44.118 fused_ordering(51) 00:16:44.118 fused_ordering(52) 00:16:44.118 fused_ordering(53) 00:16:44.118 fused_ordering(54) 00:16:44.118 fused_ordering(55) 00:16:44.118 fused_ordering(56) 00:16:44.118 fused_ordering(57) 00:16:44.118 fused_ordering(58) 00:16:44.118 fused_ordering(59) 00:16:44.118 fused_ordering(60) 00:16:44.118 fused_ordering(61) 00:16:44.118 fused_ordering(62) 00:16:44.118 fused_ordering(63) 00:16:44.118 fused_ordering(64) 00:16:44.118 fused_ordering(65) 00:16:44.118 fused_ordering(66) 00:16:44.118 fused_ordering(67) 00:16:44.118 fused_ordering(68) 00:16:44.118 fused_ordering(69) 00:16:44.118 fused_ordering(70) 00:16:44.118 fused_ordering(71) 00:16:44.118 fused_ordering(72) 00:16:44.118 fused_ordering(73) 00:16:44.118 fused_ordering(74) 00:16:44.118 fused_ordering(75) 00:16:44.118 fused_ordering(76) 00:16:44.118 fused_ordering(77) 00:16:44.118 fused_ordering(78) 00:16:44.118 fused_ordering(79) 00:16:44.118 fused_ordering(80) 00:16:44.118 fused_ordering(81) 00:16:44.118 fused_ordering(82) 00:16:44.118 fused_ordering(83) 00:16:44.118 fused_ordering(84) 00:16:44.118 fused_ordering(85) 00:16:44.118 fused_ordering(86) 00:16:44.118 fused_ordering(87) 00:16:44.119 fused_ordering(88) 00:16:44.119 fused_ordering(89) 00:16:44.119 fused_ordering(90) 00:16:44.119 fused_ordering(91) 00:16:44.119 fused_ordering(92) 00:16:44.119 fused_ordering(93) 00:16:44.119 fused_ordering(94) 00:16:44.119 fused_ordering(95) 00:16:44.119 fused_ordering(96) 
00:16:44.119 fused_ordering(97) 00:16:44.119 fused_ordering(98) 00:16:44.119 fused_ordering(99) 00:16:44.119 fused_ordering(100) 00:16:44.119 fused_ordering(101) 00:16:44.119 fused_ordering(102) 00:16:44.119 fused_ordering(103) 00:16:44.119 fused_ordering(104) 00:16:44.119 fused_ordering(105) 00:16:44.119 fused_ordering(106) 00:16:44.119 fused_ordering(107) 00:16:44.119 fused_ordering(108) 00:16:44.119 fused_ordering(109) 00:16:44.119 fused_ordering(110) 00:16:44.119 fused_ordering(111) 00:16:44.119 fused_ordering(112) 00:16:44.119 fused_ordering(113) 00:16:44.119 fused_ordering(114) 00:16:44.119 fused_ordering(115) 00:16:44.119 fused_ordering(116) 00:16:44.119 fused_ordering(117) 00:16:44.119 fused_ordering(118) 00:16:44.119 fused_ordering(119) 00:16:44.119 fused_ordering(120) 00:16:44.119 fused_ordering(121) 00:16:44.119 fused_ordering(122) 00:16:44.119 fused_ordering(123) 00:16:44.119 fused_ordering(124) 00:16:44.119 fused_ordering(125) 00:16:44.119 fused_ordering(126) 00:16:44.119 fused_ordering(127) 00:16:44.119 fused_ordering(128) 00:16:44.119 fused_ordering(129) 00:16:44.119 fused_ordering(130) 00:16:44.119 fused_ordering(131) 00:16:44.119 fused_ordering(132) 00:16:44.119 fused_ordering(133) 00:16:44.119 fused_ordering(134) 00:16:44.119 fused_ordering(135) 00:16:44.119 fused_ordering(136) 00:16:44.119 fused_ordering(137) 00:16:44.119 fused_ordering(138) 00:16:44.119 fused_ordering(139) 00:16:44.119 fused_ordering(140) 00:16:44.119 fused_ordering(141) 00:16:44.119 fused_ordering(142) 00:16:44.119 fused_ordering(143) 00:16:44.119 fused_ordering(144) 00:16:44.119 fused_ordering(145) 00:16:44.119 fused_ordering(146) 00:16:44.119 fused_ordering(147) 00:16:44.119 fused_ordering(148) 00:16:44.119 fused_ordering(149) 00:16:44.119 fused_ordering(150) 00:16:44.119 fused_ordering(151) 00:16:44.119 fused_ordering(152) 00:16:44.119 fused_ordering(153) 00:16:44.119 fused_ordering(154) 00:16:44.119 fused_ordering(155) 00:16:44.119 fused_ordering(156) 00:16:44.119 fused_ordering(157) 00:16:44.119 fused_ordering(158) 00:16:44.119 fused_ordering(159) 00:16:44.119 fused_ordering(160) 00:16:44.119 fused_ordering(161) 00:16:44.119 fused_ordering(162) 00:16:44.119 fused_ordering(163) 00:16:44.119 fused_ordering(164) 00:16:44.119 fused_ordering(165) 00:16:44.119 fused_ordering(166) 00:16:44.119 fused_ordering(167) 00:16:44.119 fused_ordering(168) 00:16:44.119 fused_ordering(169) 00:16:44.119 fused_ordering(170) 00:16:44.119 fused_ordering(171) 00:16:44.119 fused_ordering(172) 00:16:44.119 fused_ordering(173) 00:16:44.119 fused_ordering(174) 00:16:44.119 fused_ordering(175) 00:16:44.119 fused_ordering(176) 00:16:44.119 fused_ordering(177) 00:16:44.119 fused_ordering(178) 00:16:44.119 fused_ordering(179) 00:16:44.119 fused_ordering(180) 00:16:44.119 fused_ordering(181) 00:16:44.119 fused_ordering(182) 00:16:44.119 fused_ordering(183) 00:16:44.119 fused_ordering(184) 00:16:44.119 fused_ordering(185) 00:16:44.119 fused_ordering(186) 00:16:44.119 fused_ordering(187) 00:16:44.119 fused_ordering(188) 00:16:44.119 fused_ordering(189) 00:16:44.119 fused_ordering(190) 00:16:44.119 fused_ordering(191) 00:16:44.119 fused_ordering(192) 00:16:44.119 fused_ordering(193) 00:16:44.119 fused_ordering(194) 00:16:44.119 fused_ordering(195) 00:16:44.119 fused_ordering(196) 00:16:44.119 fused_ordering(197) 00:16:44.119 fused_ordering(198) 00:16:44.119 fused_ordering(199) 00:16:44.119 fused_ordering(200) 00:16:44.119 fused_ordering(201) 00:16:44.119 fused_ordering(202) 00:16:44.119 fused_ordering(203) 00:16:44.119 
fused_ordering(204) 00:16:44.119 fused_ordering(205) 00:16:44.688 fused_ordering(206) 00:16:44.688 fused_ordering(207) 00:16:44.688 fused_ordering(208) 00:16:44.688 fused_ordering(209) 00:16:44.688 fused_ordering(210) 00:16:44.688 fused_ordering(211) 00:16:44.688 fused_ordering(212) 00:16:44.688 fused_ordering(213) 00:16:44.688 fused_ordering(214) 00:16:44.688 fused_ordering(215) 00:16:44.688 fused_ordering(216) 00:16:44.688 fused_ordering(217) 00:16:44.688 fused_ordering(218) 00:16:44.688 fused_ordering(219) 00:16:44.688 fused_ordering(220) 00:16:44.688 fused_ordering(221) 00:16:44.688 fused_ordering(222) 00:16:44.688 fused_ordering(223) 00:16:44.688 fused_ordering(224) 00:16:44.688 fused_ordering(225) 00:16:44.688 fused_ordering(226) 00:16:44.688 fused_ordering(227) 00:16:44.688 fused_ordering(228) 00:16:44.688 fused_ordering(229) 00:16:44.688 fused_ordering(230) 00:16:44.688 fused_ordering(231) 00:16:44.688 fused_ordering(232) 00:16:44.688 fused_ordering(233) 00:16:44.688 fused_ordering(234) 00:16:44.688 fused_ordering(235) 00:16:44.688 fused_ordering(236) 00:16:44.688 fused_ordering(237) 00:16:44.688 fused_ordering(238) 00:16:44.688 fused_ordering(239) 00:16:44.688 fused_ordering(240) 00:16:44.688 fused_ordering(241) 00:16:44.688 fused_ordering(242) 00:16:44.688 fused_ordering(243) 00:16:44.688 fused_ordering(244) 00:16:44.688 fused_ordering(245) 00:16:44.688 fused_ordering(246) 00:16:44.688 fused_ordering(247) 00:16:44.688 fused_ordering(248) 00:16:44.688 fused_ordering(249) 00:16:44.688 fused_ordering(250) 00:16:44.688 fused_ordering(251) 00:16:44.688 fused_ordering(252) 00:16:44.688 fused_ordering(253) 00:16:44.688 fused_ordering(254) 00:16:44.688 fused_ordering(255) 00:16:44.688 fused_ordering(256) 00:16:44.688 fused_ordering(257) 00:16:44.688 fused_ordering(258) 00:16:44.688 fused_ordering(259) 00:16:44.688 fused_ordering(260) 00:16:44.688 fused_ordering(261) 00:16:44.688 fused_ordering(262) 00:16:44.688 fused_ordering(263) 00:16:44.688 fused_ordering(264) 00:16:44.688 fused_ordering(265) 00:16:44.688 fused_ordering(266) 00:16:44.688 fused_ordering(267) 00:16:44.688 fused_ordering(268) 00:16:44.688 fused_ordering(269) 00:16:44.688 fused_ordering(270) 00:16:44.688 fused_ordering(271) 00:16:44.688 fused_ordering(272) 00:16:44.688 fused_ordering(273) 00:16:44.688 fused_ordering(274) 00:16:44.688 fused_ordering(275) 00:16:44.688 fused_ordering(276) 00:16:44.688 fused_ordering(277) 00:16:44.688 fused_ordering(278) 00:16:44.688 fused_ordering(279) 00:16:44.688 fused_ordering(280) 00:16:44.688 fused_ordering(281) 00:16:44.688 fused_ordering(282) 00:16:44.688 fused_ordering(283) 00:16:44.688 fused_ordering(284) 00:16:44.688 fused_ordering(285) 00:16:44.688 fused_ordering(286) 00:16:44.688 fused_ordering(287) 00:16:44.688 fused_ordering(288) 00:16:44.688 fused_ordering(289) 00:16:44.688 fused_ordering(290) 00:16:44.688 fused_ordering(291) 00:16:44.688 fused_ordering(292) 00:16:44.688 fused_ordering(293) 00:16:44.688 fused_ordering(294) 00:16:44.688 fused_ordering(295) 00:16:44.688 fused_ordering(296) 00:16:44.688 fused_ordering(297) 00:16:44.688 fused_ordering(298) 00:16:44.688 fused_ordering(299) 00:16:44.688 fused_ordering(300) 00:16:44.688 fused_ordering(301) 00:16:44.688 fused_ordering(302) 00:16:44.688 fused_ordering(303) 00:16:44.688 fused_ordering(304) 00:16:44.688 fused_ordering(305) 00:16:44.688 fused_ordering(306) 00:16:44.688 fused_ordering(307) 00:16:44.688 fused_ordering(308) 00:16:44.688 fused_ordering(309) 00:16:44.688 fused_ordering(310) 00:16:44.688 fused_ordering(311) 
00:16:44.688 fused_ordering(312) 00:16:44.688 fused_ordering(313) 00:16:44.688 fused_ordering(314) 00:16:44.688 fused_ordering(315) 00:16:44.688 fused_ordering(316) 00:16:44.688 fused_ordering(317) 00:16:44.688 fused_ordering(318) 00:16:44.688 fused_ordering(319) 00:16:44.688 fused_ordering(320) 00:16:44.688 fused_ordering(321) 00:16:44.688 fused_ordering(322) 00:16:44.688 fused_ordering(323) 00:16:44.688 fused_ordering(324) 00:16:44.688 fused_ordering(325) 00:16:44.688 fused_ordering(326) 00:16:44.688 fused_ordering(327) 00:16:44.688 fused_ordering(328) 00:16:44.688 fused_ordering(329) 00:16:44.688 fused_ordering(330) 00:16:44.688 fused_ordering(331) 00:16:44.688 fused_ordering(332) 00:16:44.688 fused_ordering(333) 00:16:44.688 fused_ordering(334) 00:16:44.688 fused_ordering(335) 00:16:44.688 fused_ordering(336) 00:16:44.688 fused_ordering(337) 00:16:44.688 fused_ordering(338) 00:16:44.688 fused_ordering(339) 00:16:44.688 fused_ordering(340) 00:16:44.688 fused_ordering(341) 00:16:44.688 fused_ordering(342) 00:16:44.688 fused_ordering(343) 00:16:44.688 fused_ordering(344) 00:16:44.688 fused_ordering(345) 00:16:44.688 fused_ordering(346) 00:16:44.688 fused_ordering(347) 00:16:44.688 fused_ordering(348) 00:16:44.688 fused_ordering(349) 00:16:44.688 fused_ordering(350) 00:16:44.688 fused_ordering(351) 00:16:44.688 fused_ordering(352) 00:16:44.688 fused_ordering(353) 00:16:44.688 fused_ordering(354) 00:16:44.688 fused_ordering(355) 00:16:44.688 fused_ordering(356) 00:16:44.688 fused_ordering(357) 00:16:44.688 fused_ordering(358) 00:16:44.688 fused_ordering(359) 00:16:44.688 fused_ordering(360) 00:16:44.688 fused_ordering(361) 00:16:44.688 fused_ordering(362) 00:16:44.688 fused_ordering(363) 00:16:44.688 fused_ordering(364) 00:16:44.688 fused_ordering(365) 00:16:44.688 fused_ordering(366) 00:16:44.688 fused_ordering(367) 00:16:44.688 fused_ordering(368) 00:16:44.688 fused_ordering(369) 00:16:44.688 fused_ordering(370) 00:16:44.688 fused_ordering(371) 00:16:44.688 fused_ordering(372) 00:16:44.688 fused_ordering(373) 00:16:44.688 fused_ordering(374) 00:16:44.688 fused_ordering(375) 00:16:44.688 fused_ordering(376) 00:16:44.688 fused_ordering(377) 00:16:44.688 fused_ordering(378) 00:16:44.688 fused_ordering(379) 00:16:44.688 fused_ordering(380) 00:16:44.688 fused_ordering(381) 00:16:44.688 fused_ordering(382) 00:16:44.688 fused_ordering(383) 00:16:44.688 fused_ordering(384) 00:16:44.688 fused_ordering(385) 00:16:44.688 fused_ordering(386) 00:16:44.688 fused_ordering(387) 00:16:44.688 fused_ordering(388) 00:16:44.688 fused_ordering(389) 00:16:44.688 fused_ordering(390) 00:16:44.688 fused_ordering(391) 00:16:44.688 fused_ordering(392) 00:16:44.688 fused_ordering(393) 00:16:44.688 fused_ordering(394) 00:16:44.688 fused_ordering(395) 00:16:44.688 fused_ordering(396) 00:16:44.688 fused_ordering(397) 00:16:44.688 fused_ordering(398) 00:16:44.688 fused_ordering(399) 00:16:44.688 fused_ordering(400) 00:16:44.688 fused_ordering(401) 00:16:44.688 fused_ordering(402) 00:16:44.688 fused_ordering(403) 00:16:44.688 fused_ordering(404) 00:16:44.688 fused_ordering(405) 00:16:44.688 fused_ordering(406) 00:16:44.688 fused_ordering(407) 00:16:44.688 fused_ordering(408) 00:16:44.688 fused_ordering(409) 00:16:44.688 fused_ordering(410) 00:16:44.947 fused_ordering(411) 00:16:44.947 fused_ordering(412) 00:16:44.947 fused_ordering(413) 00:16:44.947 fused_ordering(414) 00:16:44.947 fused_ordering(415) 00:16:44.947 fused_ordering(416) 00:16:44.947 fused_ordering(417) 00:16:44.947 fused_ordering(418) 00:16:44.947 
fused_ordering(419) 00:16:44.947 fused_ordering(420) 00:16:44.947 fused_ordering(421) 00:16:44.947 fused_ordering(422) 00:16:44.947 fused_ordering(423) 00:16:44.947 fused_ordering(424) 00:16:44.947 fused_ordering(425) 00:16:44.947 fused_ordering(426) 00:16:44.947 fused_ordering(427) 00:16:44.947 fused_ordering(428) 00:16:44.947 fused_ordering(429) 00:16:44.947 fused_ordering(430) 00:16:44.947 fused_ordering(431) 00:16:44.947 fused_ordering(432) 00:16:44.947 fused_ordering(433) 00:16:44.947 fused_ordering(434) 00:16:44.947 fused_ordering(435) 00:16:44.947 fused_ordering(436) 00:16:44.947 fused_ordering(437) 00:16:44.947 fused_ordering(438) 00:16:44.947 fused_ordering(439) 00:16:44.947 fused_ordering(440) 00:16:44.947 fused_ordering(441) 00:16:44.947 fused_ordering(442) 00:16:44.947 fused_ordering(443) 00:16:44.947 fused_ordering(444) 00:16:44.947 fused_ordering(445) 00:16:44.947 fused_ordering(446) 00:16:44.947 fused_ordering(447) 00:16:44.947 fused_ordering(448) 00:16:44.947 fused_ordering(449) 00:16:44.947 fused_ordering(450) 00:16:44.947 fused_ordering(451) 00:16:44.947 fused_ordering(452) 00:16:44.947 fused_ordering(453) 00:16:44.947 fused_ordering(454) 00:16:44.947 fused_ordering(455) 00:16:44.947 fused_ordering(456) 00:16:44.947 fused_ordering(457) 00:16:44.947 fused_ordering(458) 00:16:44.947 fused_ordering(459) 00:16:44.947 fused_ordering(460) 00:16:44.947 fused_ordering(461) 00:16:44.947 fused_ordering(462) 00:16:44.947 fused_ordering(463) 00:16:44.947 fused_ordering(464) 00:16:44.947 fused_ordering(465) 00:16:44.947 fused_ordering(466) 00:16:44.947 fused_ordering(467) 00:16:44.947 fused_ordering(468) 00:16:44.947 fused_ordering(469) 00:16:44.947 fused_ordering(470) 00:16:44.947 fused_ordering(471) 00:16:44.947 fused_ordering(472) 00:16:44.947 fused_ordering(473) 00:16:44.947 fused_ordering(474) 00:16:44.947 fused_ordering(475) 00:16:44.947 fused_ordering(476) 00:16:44.947 fused_ordering(477) 00:16:44.947 fused_ordering(478) 00:16:44.947 fused_ordering(479) 00:16:44.947 fused_ordering(480) 00:16:44.947 fused_ordering(481) 00:16:44.947 fused_ordering(482) 00:16:44.947 fused_ordering(483) 00:16:44.947 fused_ordering(484) 00:16:44.947 fused_ordering(485) 00:16:44.947 fused_ordering(486) 00:16:44.947 fused_ordering(487) 00:16:44.947 fused_ordering(488) 00:16:44.947 fused_ordering(489) 00:16:44.947 fused_ordering(490) 00:16:44.947 fused_ordering(491) 00:16:44.947 fused_ordering(492) 00:16:44.947 fused_ordering(493) 00:16:44.947 fused_ordering(494) 00:16:44.947 fused_ordering(495) 00:16:44.947 fused_ordering(496) 00:16:44.947 fused_ordering(497) 00:16:44.947 fused_ordering(498) 00:16:44.947 fused_ordering(499) 00:16:44.947 fused_ordering(500) 00:16:44.947 fused_ordering(501) 00:16:44.947 fused_ordering(502) 00:16:44.947 fused_ordering(503) 00:16:44.947 fused_ordering(504) 00:16:44.947 fused_ordering(505) 00:16:44.947 fused_ordering(506) 00:16:44.947 fused_ordering(507) 00:16:44.947 fused_ordering(508) 00:16:44.947 fused_ordering(509) 00:16:44.947 fused_ordering(510) 00:16:44.947 fused_ordering(511) 00:16:44.947 fused_ordering(512) 00:16:44.947 fused_ordering(513) 00:16:44.947 fused_ordering(514) 00:16:44.947 fused_ordering(515) 00:16:44.947 fused_ordering(516) 00:16:44.947 fused_ordering(517) 00:16:44.947 fused_ordering(518) 00:16:44.947 fused_ordering(519) 00:16:44.947 fused_ordering(520) 00:16:44.947 fused_ordering(521) 00:16:44.947 fused_ordering(522) 00:16:44.947 fused_ordering(523) 00:16:44.947 fused_ordering(524) 00:16:44.947 fused_ordering(525) 00:16:44.947 fused_ordering(526) 
00:16:44.947 fused_ordering(527) 00:16:44.947 fused_ordering(528) 00:16:44.947 fused_ordering(529) 00:16:44.947 fused_ordering(530) 00:16:44.947 fused_ordering(531) 00:16:44.947 fused_ordering(532) 00:16:44.947 fused_ordering(533) 00:16:44.947 fused_ordering(534) 00:16:44.947 fused_ordering(535) 00:16:44.947 fused_ordering(536) 00:16:44.947 fused_ordering(537) 00:16:44.947 fused_ordering(538) 00:16:44.947 fused_ordering(539) 00:16:44.947 fused_ordering(540) 00:16:44.947 fused_ordering(541) 00:16:44.947 fused_ordering(542) 00:16:44.947 fused_ordering(543) 00:16:44.947 fused_ordering(544) 00:16:44.947 fused_ordering(545) 00:16:44.947 fused_ordering(546) 00:16:44.947 fused_ordering(547) 00:16:44.947 fused_ordering(548) 00:16:44.947 fused_ordering(549) 00:16:44.947 fused_ordering(550) 00:16:44.947 fused_ordering(551) 00:16:44.947 fused_ordering(552) 00:16:44.947 fused_ordering(553) 00:16:44.947 fused_ordering(554) 00:16:44.947 fused_ordering(555) 00:16:44.947 fused_ordering(556) 00:16:44.947 fused_ordering(557) 00:16:44.947 fused_ordering(558) 00:16:44.947 fused_ordering(559) 00:16:44.947 fused_ordering(560) 00:16:44.947 fused_ordering(561) 00:16:44.947 fused_ordering(562) 00:16:44.947 fused_ordering(563) 00:16:44.947 fused_ordering(564) 00:16:44.947 fused_ordering(565) 00:16:44.947 fused_ordering(566) 00:16:44.947 fused_ordering(567) 00:16:44.947 fused_ordering(568) 00:16:44.947 fused_ordering(569) 00:16:44.947 fused_ordering(570) 00:16:44.947 fused_ordering(571) 00:16:44.947 fused_ordering(572) 00:16:44.947 fused_ordering(573) 00:16:44.947 fused_ordering(574) 00:16:44.947 fused_ordering(575) 00:16:44.947 fused_ordering(576) 00:16:44.947 fused_ordering(577) 00:16:44.947 fused_ordering(578) 00:16:44.947 fused_ordering(579) 00:16:44.947 fused_ordering(580) 00:16:44.947 fused_ordering(581) 00:16:44.947 fused_ordering(582) 00:16:44.947 fused_ordering(583) 00:16:44.947 fused_ordering(584) 00:16:44.947 fused_ordering(585) 00:16:44.947 fused_ordering(586) 00:16:44.947 fused_ordering(587) 00:16:44.947 fused_ordering(588) 00:16:44.947 fused_ordering(589) 00:16:44.947 fused_ordering(590) 00:16:44.947 fused_ordering(591) 00:16:44.947 fused_ordering(592) 00:16:44.947 fused_ordering(593) 00:16:44.947 fused_ordering(594) 00:16:44.947 fused_ordering(595) 00:16:44.947 fused_ordering(596) 00:16:44.947 fused_ordering(597) 00:16:44.947 fused_ordering(598) 00:16:44.947 fused_ordering(599) 00:16:44.947 fused_ordering(600) 00:16:44.947 fused_ordering(601) 00:16:44.947 fused_ordering(602) 00:16:44.947 fused_ordering(603) 00:16:44.947 fused_ordering(604) 00:16:44.947 fused_ordering(605) 00:16:44.947 fused_ordering(606) 00:16:44.947 fused_ordering(607) 00:16:44.947 fused_ordering(608) 00:16:44.947 fused_ordering(609) 00:16:44.947 fused_ordering(610) 00:16:44.947 fused_ordering(611) 00:16:44.947 fused_ordering(612) 00:16:44.947 fused_ordering(613) 00:16:44.947 fused_ordering(614) 00:16:44.947 fused_ordering(615) 00:16:45.518 fused_ordering(616) 00:16:45.518 fused_ordering(617) 00:16:45.518 fused_ordering(618) 00:16:45.518 fused_ordering(619) 00:16:45.518 fused_ordering(620) 00:16:45.518 fused_ordering(621) 00:16:45.518 fused_ordering(622) 00:16:45.518 fused_ordering(623) 00:16:45.518 fused_ordering(624) 00:16:45.518 fused_ordering(625) 00:16:45.518 fused_ordering(626) 00:16:45.518 fused_ordering(627) 00:16:45.518 fused_ordering(628) 00:16:45.518 fused_ordering(629) 00:16:45.518 fused_ordering(630) 00:16:45.518 fused_ordering(631) 00:16:45.518 fused_ordering(632) 00:16:45.518 fused_ordering(633) 00:16:45.518 
fused_ordering(634) 00:16:45.518 fused_ordering(635) 00:16:45.518 fused_ordering(636) 00:16:45.518 fused_ordering(637) 00:16:45.518 fused_ordering(638) 00:16:45.518 fused_ordering(639) 00:16:45.518 fused_ordering(640) 00:16:45.518 fused_ordering(641) 00:16:45.518 fused_ordering(642) 00:16:45.518 fused_ordering(643) 00:16:45.518 fused_ordering(644) 00:16:45.518 fused_ordering(645) 00:16:45.518 fused_ordering(646) 00:16:45.518 fused_ordering(647) 00:16:45.518 fused_ordering(648) 00:16:45.518 fused_ordering(649) 00:16:45.518 fused_ordering(650) 00:16:45.518 fused_ordering(651) 00:16:45.518 fused_ordering(652) 00:16:45.518 fused_ordering(653) 00:16:45.518 fused_ordering(654) 00:16:45.518 fused_ordering(655) 00:16:45.518 fused_ordering(656) 00:16:45.518 fused_ordering(657) 00:16:45.518 fused_ordering(658) 00:16:45.518 fused_ordering(659) 00:16:45.518 fused_ordering(660) 00:16:45.518 fused_ordering(661) 00:16:45.518 fused_ordering(662) 00:16:45.518 fused_ordering(663) 00:16:45.518 fused_ordering(664) 00:16:45.518 fused_ordering(665) 00:16:45.518 fused_ordering(666) 00:16:45.518 fused_ordering(667) 00:16:45.518 fused_ordering(668) 00:16:45.518 fused_ordering(669) 00:16:45.518 fused_ordering(670) 00:16:45.518 fused_ordering(671) 00:16:45.518 fused_ordering(672) 00:16:45.518 fused_ordering(673) 00:16:45.518 fused_ordering(674) 00:16:45.518 fused_ordering(675) 00:16:45.518 fused_ordering(676) 00:16:45.518 fused_ordering(677) 00:16:45.518 fused_ordering(678) 00:16:45.518 fused_ordering(679) 00:16:45.518 fused_ordering(680) 00:16:45.518 fused_ordering(681) 00:16:45.518 fused_ordering(682) 00:16:45.518 fused_ordering(683) 00:16:45.518 fused_ordering(684) 00:16:45.518 fused_ordering(685) 00:16:45.518 fused_ordering(686) 00:16:45.518 fused_ordering(687) 00:16:45.518 fused_ordering(688) 00:16:45.518 fused_ordering(689) 00:16:45.518 fused_ordering(690) 00:16:45.518 fused_ordering(691) 00:16:45.518 fused_ordering(692) 00:16:45.518 fused_ordering(693) 00:16:45.518 fused_ordering(694) 00:16:45.518 fused_ordering(695) 00:16:45.518 fused_ordering(696) 00:16:45.518 fused_ordering(697) 00:16:45.518 fused_ordering(698) 00:16:45.518 fused_ordering(699) 00:16:45.518 fused_ordering(700) 00:16:45.518 fused_ordering(701) 00:16:45.518 fused_ordering(702) 00:16:45.518 fused_ordering(703) 00:16:45.518 fused_ordering(704) 00:16:45.518 fused_ordering(705) 00:16:45.518 fused_ordering(706) 00:16:45.518 fused_ordering(707) 00:16:45.518 fused_ordering(708) 00:16:45.518 fused_ordering(709) 00:16:45.518 fused_ordering(710) 00:16:45.518 fused_ordering(711) 00:16:45.518 fused_ordering(712) 00:16:45.518 fused_ordering(713) 00:16:45.518 fused_ordering(714) 00:16:45.518 fused_ordering(715) 00:16:45.518 fused_ordering(716) 00:16:45.518 fused_ordering(717) 00:16:45.518 fused_ordering(718) 00:16:45.518 fused_ordering(719) 00:16:45.518 fused_ordering(720) 00:16:45.518 fused_ordering(721) 00:16:45.518 fused_ordering(722) 00:16:45.518 fused_ordering(723) 00:16:45.518 fused_ordering(724) 00:16:45.518 fused_ordering(725) 00:16:45.518 fused_ordering(726) 00:16:45.518 fused_ordering(727) 00:16:45.518 fused_ordering(728) 00:16:45.518 fused_ordering(729) 00:16:45.518 fused_ordering(730) 00:16:45.518 fused_ordering(731) 00:16:45.518 fused_ordering(732) 00:16:45.518 fused_ordering(733) 00:16:45.518 fused_ordering(734) 00:16:45.518 fused_ordering(735) 00:16:45.518 fused_ordering(736) 00:16:45.518 fused_ordering(737) 00:16:45.518 fused_ordering(738) 00:16:45.518 fused_ordering(739) 00:16:45.518 fused_ordering(740) 00:16:45.518 fused_ordering(741) 
00:16:45.518 fused_ordering(742) 00:16:45.518 fused_ordering(743) 00:16:45.518 fused_ordering(744) 00:16:45.518 fused_ordering(745) 00:16:45.518 fused_ordering(746) 00:16:45.518 fused_ordering(747) 00:16:45.518 fused_ordering(748) 00:16:45.518 fused_ordering(749) 00:16:45.518 fused_ordering(750) 00:16:45.518 fused_ordering(751) 00:16:45.518 fused_ordering(752) 00:16:45.518 fused_ordering(753) 00:16:45.518 fused_ordering(754) 00:16:45.518 fused_ordering(755) 00:16:45.518 fused_ordering(756) 00:16:45.518 fused_ordering(757) 00:16:45.518 fused_ordering(758) 00:16:45.518 fused_ordering(759) 00:16:45.518 fused_ordering(760) 00:16:45.518 fused_ordering(761) 00:16:45.518 fused_ordering(762) 00:16:45.518 fused_ordering(763) 00:16:45.518 fused_ordering(764) 00:16:45.518 fused_ordering(765) 00:16:45.518 fused_ordering(766) 00:16:45.518 fused_ordering(767) 00:16:45.518 fused_ordering(768) 00:16:45.518 fused_ordering(769) 00:16:45.518 fused_ordering(770) 00:16:45.518 fused_ordering(771) 00:16:45.518 fused_ordering(772) 00:16:45.518 fused_ordering(773) 00:16:45.518 fused_ordering(774) 00:16:45.518 fused_ordering(775) 00:16:45.518 fused_ordering(776) 00:16:45.518 fused_ordering(777) 00:16:45.518 fused_ordering(778) 00:16:45.518 fused_ordering(779) 00:16:45.518 fused_ordering(780) 00:16:45.518 fused_ordering(781) 00:16:45.518 fused_ordering(782) 00:16:45.518 fused_ordering(783) 00:16:45.518 fused_ordering(784) 00:16:45.518 fused_ordering(785) 00:16:45.518 fused_ordering(786) 00:16:45.519 fused_ordering(787) 00:16:45.519 fused_ordering(788) 00:16:45.519 fused_ordering(789) 00:16:45.519 fused_ordering(790) 00:16:45.519 fused_ordering(791) 00:16:45.519 fused_ordering(792) 00:16:45.519 fused_ordering(793) 00:16:45.519 fused_ordering(794) 00:16:45.519 fused_ordering(795) 00:16:45.519 fused_ordering(796) 00:16:45.519 fused_ordering(797) 00:16:45.519 fused_ordering(798) 00:16:45.519 fused_ordering(799) 00:16:45.519 fused_ordering(800) 00:16:45.519 fused_ordering(801) 00:16:45.519 fused_ordering(802) 00:16:45.519 fused_ordering(803) 00:16:45.519 fused_ordering(804) 00:16:45.519 fused_ordering(805) 00:16:45.519 fused_ordering(806) 00:16:45.519 fused_ordering(807) 00:16:45.519 fused_ordering(808) 00:16:45.519 fused_ordering(809) 00:16:45.519 fused_ordering(810) 00:16:45.519 fused_ordering(811) 00:16:45.519 fused_ordering(812) 00:16:45.519 fused_ordering(813) 00:16:45.519 fused_ordering(814) 00:16:45.519 fused_ordering(815) 00:16:45.519 fused_ordering(816) 00:16:45.519 fused_ordering(817) 00:16:45.519 fused_ordering(818) 00:16:45.519 fused_ordering(819) 00:16:45.519 fused_ordering(820) 00:16:46.459 fused_ordering(821) 00:16:46.459 fused_ordering(822) 00:16:46.459 fused_ordering(823) 00:16:46.459 fused_ordering(824) 00:16:46.459 fused_ordering(825) 00:16:46.459 fused_ordering(826) 00:16:46.459 fused_ordering(827) 00:16:46.459 fused_ordering(828) 00:16:46.459 fused_ordering(829) 00:16:46.459 fused_ordering(830) 00:16:46.459 fused_ordering(831) 00:16:46.459 fused_ordering(832) 00:16:46.459 fused_ordering(833) 00:16:46.459 fused_ordering(834) 00:16:46.459 fused_ordering(835) 00:16:46.459 fused_ordering(836) 00:16:46.459 fused_ordering(837) 00:16:46.459 fused_ordering(838) 00:16:46.459 fused_ordering(839) 00:16:46.459 fused_ordering(840) 00:16:46.459 fused_ordering(841) 00:16:46.459 fused_ordering(842) 00:16:46.459 fused_ordering(843) 00:16:46.459 fused_ordering(844) 00:16:46.459 fused_ordering(845) 00:16:46.459 fused_ordering(846) 00:16:46.459 fused_ordering(847) 00:16:46.459 fused_ordering(848) 00:16:46.459 
fused_ordering(849) 00:16:46.459 fused_ordering(850) 00:16:46.459 fused_ordering(851) 00:16:46.459 fused_ordering(852) 00:16:46.459 fused_ordering(853) 00:16:46.459 fused_ordering(854) 00:16:46.459 fused_ordering(855) 00:16:46.459 fused_ordering(856) 00:16:46.459 fused_ordering(857) 00:16:46.459 fused_ordering(858) 00:16:46.459 fused_ordering(859) 00:16:46.459 fused_ordering(860) 00:16:46.459 fused_ordering(861) 00:16:46.459 fused_ordering(862) 00:16:46.459 fused_ordering(863) 00:16:46.459 fused_ordering(864) 00:16:46.459 fused_ordering(865) 00:16:46.459 fused_ordering(866) 00:16:46.459 fused_ordering(867) 00:16:46.459 fused_ordering(868) 00:16:46.459 fused_ordering(869) 00:16:46.459 fused_ordering(870) 00:16:46.459 fused_ordering(871) 00:16:46.459 fused_ordering(872) 00:16:46.459 fused_ordering(873) 00:16:46.459 fused_ordering(874) 00:16:46.459 fused_ordering(875) 00:16:46.459 fused_ordering(876) 00:16:46.459 fused_ordering(877) 00:16:46.459 fused_ordering(878) 00:16:46.459 fused_ordering(879) 00:16:46.459 fused_ordering(880) 00:16:46.459 fused_ordering(881) 00:16:46.459 fused_ordering(882) 00:16:46.459 fused_ordering(883) 00:16:46.459 fused_ordering(884) 00:16:46.459 fused_ordering(885) 00:16:46.459 fused_ordering(886) 00:16:46.459 fused_ordering(887) 00:16:46.459 fused_ordering(888) 00:16:46.459 fused_ordering(889) 00:16:46.459 fused_ordering(890) 00:16:46.459 fused_ordering(891) 00:16:46.459 fused_ordering(892) 00:16:46.459 fused_ordering(893) 00:16:46.459 fused_ordering(894) 00:16:46.459 fused_ordering(895) 00:16:46.459 fused_ordering(896) 00:16:46.459 fused_ordering(897) 00:16:46.459 fused_ordering(898) 00:16:46.459 fused_ordering(899) 00:16:46.459 fused_ordering(900) 00:16:46.459 fused_ordering(901) 00:16:46.459 fused_ordering(902) 00:16:46.459 fused_ordering(903) 00:16:46.459 fused_ordering(904) 00:16:46.459 fused_ordering(905) 00:16:46.459 fused_ordering(906) 00:16:46.459 fused_ordering(907) 00:16:46.459 fused_ordering(908) 00:16:46.459 fused_ordering(909) 00:16:46.459 fused_ordering(910) 00:16:46.459 fused_ordering(911) 00:16:46.459 fused_ordering(912) 00:16:46.459 fused_ordering(913) 00:16:46.459 fused_ordering(914) 00:16:46.459 fused_ordering(915) 00:16:46.459 fused_ordering(916) 00:16:46.459 fused_ordering(917) 00:16:46.459 fused_ordering(918) 00:16:46.459 fused_ordering(919) 00:16:46.459 fused_ordering(920) 00:16:46.459 fused_ordering(921) 00:16:46.459 fused_ordering(922) 00:16:46.459 fused_ordering(923) 00:16:46.459 fused_ordering(924) 00:16:46.459 fused_ordering(925) 00:16:46.459 fused_ordering(926) 00:16:46.459 fused_ordering(927) 00:16:46.459 fused_ordering(928) 00:16:46.459 fused_ordering(929) 00:16:46.459 fused_ordering(930) 00:16:46.459 fused_ordering(931) 00:16:46.459 fused_ordering(932) 00:16:46.459 fused_ordering(933) 00:16:46.459 fused_ordering(934) 00:16:46.459 fused_ordering(935) 00:16:46.459 fused_ordering(936) 00:16:46.459 fused_ordering(937) 00:16:46.459 fused_ordering(938) 00:16:46.459 fused_ordering(939) 00:16:46.459 fused_ordering(940) 00:16:46.459 fused_ordering(941) 00:16:46.459 fused_ordering(942) 00:16:46.459 fused_ordering(943) 00:16:46.459 fused_ordering(944) 00:16:46.459 fused_ordering(945) 00:16:46.459 fused_ordering(946) 00:16:46.459 fused_ordering(947) 00:16:46.459 fused_ordering(948) 00:16:46.459 fused_ordering(949) 00:16:46.459 fused_ordering(950) 00:16:46.459 fused_ordering(951) 00:16:46.459 fused_ordering(952) 00:16:46.459 fused_ordering(953) 00:16:46.459 fused_ordering(954) 00:16:46.459 fused_ordering(955) 00:16:46.459 fused_ordering(956) 
00:16:46.459 fused_ordering(957) 00:16:46.459 fused_ordering(958) 00:16:46.459 fused_ordering(959) 00:16:46.459 fused_ordering(960) 00:16:46.459 fused_ordering(961) 00:16:46.459 fused_ordering(962) 00:16:46.459 fused_ordering(963) 00:16:46.459 fused_ordering(964) 00:16:46.459 fused_ordering(965) 00:16:46.459 fused_ordering(966) 00:16:46.459 fused_ordering(967) 00:16:46.459 fused_ordering(968) 00:16:46.459 fused_ordering(969) 00:16:46.459 fused_ordering(970) 00:16:46.459 fused_ordering(971) 00:16:46.459 fused_ordering(972) 00:16:46.459 fused_ordering(973) 00:16:46.459 fused_ordering(974) 00:16:46.459 fused_ordering(975) 00:16:46.459 fused_ordering(976) 00:16:46.459 fused_ordering(977) 00:16:46.459 fused_ordering(978) 00:16:46.459 fused_ordering(979) 00:16:46.459 fused_ordering(980) 00:16:46.459 fused_ordering(981) 00:16:46.459 fused_ordering(982) 00:16:46.459 fused_ordering(983) 00:16:46.459 fused_ordering(984) 00:16:46.459 fused_ordering(985) 00:16:46.459 fused_ordering(986) 00:16:46.459 fused_ordering(987) 00:16:46.459 fused_ordering(988) 00:16:46.459 fused_ordering(989) 00:16:46.459 fused_ordering(990) 00:16:46.459 fused_ordering(991) 00:16:46.459 fused_ordering(992) 00:16:46.459 fused_ordering(993) 00:16:46.459 fused_ordering(994) 00:16:46.459 fused_ordering(995) 00:16:46.459 fused_ordering(996) 00:16:46.459 fused_ordering(997) 00:16:46.459 fused_ordering(998) 00:16:46.459 fused_ordering(999) 00:16:46.459 fused_ordering(1000) 00:16:46.459 fused_ordering(1001) 00:16:46.459 fused_ordering(1002) 00:16:46.459 fused_ordering(1003) 00:16:46.459 fused_ordering(1004) 00:16:46.459 fused_ordering(1005) 00:16:46.459 fused_ordering(1006) 00:16:46.459 fused_ordering(1007) 00:16:46.459 fused_ordering(1008) 00:16:46.459 fused_ordering(1009) 00:16:46.459 fused_ordering(1010) 00:16:46.459 fused_ordering(1011) 00:16:46.459 fused_ordering(1012) 00:16:46.459 fused_ordering(1013) 00:16:46.459 fused_ordering(1014) 00:16:46.459 fused_ordering(1015) 00:16:46.459 fused_ordering(1016) 00:16:46.459 fused_ordering(1017) 00:16:46.459 fused_ordering(1018) 00:16:46.459 fused_ordering(1019) 00:16:46.459 fused_ordering(1020) 00:16:46.459 fused_ordering(1021) 00:16:46.459 fused_ordering(1022) 00:16:46.459 fused_ordering(1023) 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:46.459 rmmod nvme_tcp 00:16:46.459 rmmod nvme_fabrics 00:16:46.459 rmmod nvme_keyring 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:46.459 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1414398 ']' 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1414398 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1414398 ']' 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1414398 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1414398 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1414398' 00:16:46.460 killing process with pid 1414398 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1414398 00:16:46.460 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1414398 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.718 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.719 01:55:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:49.262 00:16:49.262 real 0m7.726s 00:16:49.262 user 0m5.175s 00:16:49.262 sys 0m3.480s 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:49.262 ************************************ 00:16:49.262 END TEST nvmf_fused_ordering 00:16:49.262 ************************************ 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:49.262 01:55:03 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:49.262 ************************************ 00:16:49.262 START TEST nvmf_ns_masking 00:16:49.262 ************************************ 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:16:49.262 * Looking for test storage... 00:16:49.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:49.262 01:55:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=05a96fce-9983-4262-86e8-c9d4d2c76bab 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ebcb2082-b323-4753-bf69-7ce2a94eaafc 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=30f1530e-607e-4062-8c3a-a3d251e84bde 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:49.262 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:49.263 01:55:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.201 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.202 01:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:51.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:16:51.202 00:16:51.202 --- 10.0.0.2 ping statistics --- 00:16:51.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.202 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:51.202 00:16:51.202 --- 10.0.0.1 ping statistics --- 00:16:51.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.202 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1416749 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1416749 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1416749 ']' 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:51.202 01:55:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.202 [2024-07-24 01:55:05.853032] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:16:51.202 [2024-07-24 01:55:05.853130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.202 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.202 [2024-07-24 01:55:05.929378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.202 [2024-07-24 01:55:06.027277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.202 [2024-07-24 01:55:06.027353] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.202 [2024-07-24 01:55:06.027370] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.202 [2024-07-24 01:55:06.027384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.202 [2024-07-24 01:55:06.027396] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.202 [2024-07-24 01:55:06.027427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.461 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:51.720 [2024-07-24 01:55:06.448176] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.720 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:51.720 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:51.720 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:51.978 Malloc1 00:16:51.978 01:55:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:52.236 Malloc2 00:16:52.236 01:55:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:52.494 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:52.752 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.010 [2024-07-24 01:55:07.746038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.010 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:53.010 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 30f1530e-607e-4062-8c3a-a3d251e84bde -a 10.0.0.2 -s 4420 -i 4 00:16:53.269 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:53.269 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:16:53.269 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:53.269 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:53.269 01:55:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:16:55.173 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:55.173 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:55.173 01:55:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:55.173 [ 0]:0x1 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:16:55.173 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:55.432 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36c0f9a3b9384c6ab3d140179688b41e 00:16:55.432 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36c0f9a3b9384c6ab3d140179688b41e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:55.433 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:55.691 [ 0]:0x1 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36c0f9a3b9384c6ab3d140179688b41e 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36c0f9a3b9384c6ab3d140179688b41e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:55.691 [ 1]:0x2 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:55.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.691 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.258 01:55:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 30f1530e-607e-4062-8c3a-a3d251e84bde -a 10.0.0.2 -s 4420 -i 4 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:16:56.518 01:55:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:59.053 [ 0]:0x2 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:59.053 [ 0]:0x1 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:59.053 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36c0f9a3b9384c6ab3d140179688b41e 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36c0f9a3b9384c6ab3d140179688b41e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:59.310 [ 1]:0x2 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:16:59.310 01:55:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:59.310 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:16:59.310 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:59.310 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:59.568 [ 0]:0x2 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.568 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:59.825 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:59.825 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 30f1530e-607e-4062-8c3a-a3d251e84bde -a 10.0.0.2 -s 4420 -i 4 00:17:00.085 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:00.085 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:17:00.085 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:17:00.085 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:17:00.085 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:17:00.085 01:55:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:01.992 [ 0]:0x1 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:01.992 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=36c0f9a3b9384c6ab3d140179688b41e 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 36c0f9a3b9384c6ab3d140179688b41e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:02.250 [ 1]:0x2 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:02.250 01:55:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:02.510 01:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:02.510 [ 0]:0x2 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:02.510 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:02.769 [2024-07-24 01:55:17.559618] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:02.769 request: 00:17:02.769 { 00:17:02.769 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.769 "nsid": 2, 00:17:02.769 "host": "nqn.2016-06.io.spdk:host1", 00:17:02.769 "method": "nvmf_ns_remove_host", 00:17:02.769 "req_id": 1 00:17:02.769 } 00:17:02.769 Got JSON-RPC error response 00:17:02.769 response: 00:17:02.769 { 00:17:02.769 "code": -32602, 00:17:02.769 "message": "Invalid parameters" 00:17:02.769 } 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:02.769 [ 0]:0x2 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:02.769 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07003cedf9644d6aae9c7a444de39d4c 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07003cedf9644d6aae9c7a444de39d4c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:03.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1418236 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1418236 /var/tmp/host.sock 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1418236 ']' 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.028 01:55:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:03.028 [2024-07-24 01:55:17.766219] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:17:03.028 [2024-07-24 01:55:17.766295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418236 ] 00:17:03.028 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.028 [2024-07-24 01:55:17.830466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.285 [2024-07-24 01:55:17.927918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.543 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.543 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:17:03.543 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.543 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:03.801 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 05a96fce-9983-4262-86e8-c9d4d2c76bab 00:17:03.801 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:03.801 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 05A96FCE9983426286E8C9D4D2C76BAB -i 00:17:04.059 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ebcb2082-b323-4753-bf69-7ce2a94eaafc 00:17:04.059 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:17:04.059 01:55:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EBCB2082B3234753BF697CE2A94EAAFC -i 00:17:04.316 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:04.574 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:04.832 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:04.832 01:55:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:05.401 nvme0n1 00:17:05.401 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:05.401 01:55:20 
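The rebuild traced here removes the two original namespaces and re-adds them with explicit NGUIDs derived from fixed UUIDs, then grants each namespace to a single host NQN. Condensed into the underlying RPC calls (rpc.py path shortened; flags and values copied from the trace; the upper-casing inside uuid2nguid is inferred from the logged NGUIDs, the trace itself only shows the `tr -d -` step):

  rpc=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  uuid2nguid() {
    # NGUID = UUID with dashes stripped and upper-cased (matches the values logged above)
    local uuid=$1
    echo "${uuid^^}" | tr -d -
  }

  $rpc nvmf_subsystem_remove_ns "$nqn" 1
  $rpc nvmf_subsystem_remove_ns "$nqn" 2
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 1 -g "$(uuid2nguid 05a96fce-9983-4262-86e8-c9d4d2c76bab)" -i
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc2 -n 2 -g "$(uuid2nguid ebcb2082-b323-4753-bf69-7ce2a94eaafc)" -i
  $rpc nvmf_ns_add_host "$nqn" 1 nqn.2016-06.io.spdk:host1
  $rpc nvmf_ns_add_host "$nqn" 2 nqn.2016-06.io.spdk:host2

With that in place host1 should only ever see namespace 1 and host2 only namespace 2, which is what the bdev checks further down verify.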
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:05.660 nvme1n2 00:17:05.660 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:05.660 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:05.660 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:05.660 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:05.660 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:05.918 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:05.918 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:05.918 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:05.918 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:06.176 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 05a96fce-9983-4262-86e8-c9d4d2c76bab == \0\5\a\9\6\f\c\e\-\9\9\8\3\-\4\2\6\2\-\8\6\e\8\-\c\9\d\4\d\2\c\7\6\b\a\b ]] 00:17:06.176 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:06.176 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:06.176 01:55:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ebcb2082-b323-4753-bf69-7ce2a94eaafc == \e\b\c\b\2\0\8\2\-\b\3\2\3\-\4\7\5\3\-\b\f\6\9\-\7\c\e\2\a\9\4\e\a\a\f\c ]] 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1418236 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1418236 ']' 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1418236 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1418236 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 1418236' 00:17:06.436 killing process with pid 1418236 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1418236 00:17:06.436 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1418236 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.003 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:07.003 rmmod nvme_tcp 00:17:07.003 rmmod nvme_fabrics 00:17:07.263 rmmod nvme_keyring 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1416749 ']' 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1416749 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1416749 ']' 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1416749 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416749 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416749' 00:17:07.263 killing process with pid 1416749 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1416749 00:17:07.263 01:55:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1416749 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:07.523 
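Before the teardown running here, the masking result was confirmed from the host side: a second SPDK app (spdk_tgt on /var/tmp/host.sock) attached one controller per host NQN, and each controller exposed exactly the namespace whose NGUID had been assigned for that host. The host-side RPCs, condensed from the trace (hostrpc is the ns_masking.sh wrapper around rpc.py -s /var/tmp/host.sock; path shortened):

  hostrpc() { ./scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
  hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1

  hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  # -> nvme0n1 nvme1n2 (one namespace bdev per host-scoped controller)
  hostrpc bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'
  # -> 05a96fce-9983-4262-86e8-c9d4d2c76bab
  hostrpc bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'
  # -> ebcb2082-b323-4753-bf69-7ce2a94eaafc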
01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.523 01:55:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:09.430 00:17:09.430 real 0m20.715s 00:17:09.430 user 0m26.867s 00:17:09.430 sys 0m4.036s 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:09.430 ************************************ 00:17:09.430 END TEST nvmf_ns_masking 00:17:09.430 ************************************ 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.430 01:55:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.688 ************************************ 00:17:09.688 START TEST nvmf_nvme_cli 00:17:09.688 ************************************ 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:17:09.688 * Looking for test storage... 
00:17:09.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.688 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.688 01:55:24 
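nvmf/common.sh generates the initiator identity once per test: `nvme gen-hostnqn` yields a uuid-based host NQN, and the uuid part doubles as the host ID; both are passed to every nvme-cli discover/connect later in this trace. A small sketch of how those values are consumed (the exact derivation of NVME_HOSTID in common.sh may differ; here it is simply stripped from the NQN):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # reuse the trailing uuid as the host ID

  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420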
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:17:09.689 01:55:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.595 01:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:11.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:11.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:11.595 01:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:11.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:11.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.595 01:55:26 
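At this point nvmf_tcp_init has picked the two ice ports found above and is about to split them between target and initiator: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and TCP port 4420 is opened. Condensed from the commands the trace runs next (interface names are the ones detected on this runner; the addr flush steps are omitted):

  TARGET_IF=cvl_0_0          # ends up as 10.0.0.2 inside cvl_0_0_ns_spdk
  INITIATOR_IF=cvl_0_1       # stays in the root namespace as 10.0.0.1

  ip netns add cvl_0_0_ns_spdk
  ip link set "$TARGET_IF" netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec cvl_0_0_ns_spdk ip link set "$TARGET_IF" up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator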
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.595 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:11.596 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:11.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:17:11.855 00:17:11.855 --- 10.0.0.2 ping statistics --- 00:17:11.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.855 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:11.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:17:11.855 00:17:11.855 --- 10.0.0.1 ping statistics --- 00:17:11.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.855 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1420721 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1420721 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1420721 ']' 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.855 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:11.855 [2024-07-24 01:55:26.600554] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:17:11.855 [2024-07-24 01:55:26.600636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.855 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.855 [2024-07-24 01:55:26.669141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.113 [2024-07-24 01:55:26.763931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.113 [2024-07-24 01:55:26.763993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.113 [2024-07-24 01:55:26.764009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.113 [2024-07-24 01:55:26.764023] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.113 [2024-07-24 01:55:26.764035] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.113 [2024-07-24 01:55:26.764126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.113 [2024-07-24 01:55:26.764181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.113 [2024-07-24 01:55:26.764243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.113 [2024-07-24 01:55:26.764246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.113 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.113 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:17:12.113 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.113 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.113 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.113 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 [2024-07-24 01:55:26.921868] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 Malloc0 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:12.114 01:55:26 
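With networking up, the test starts nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and provisions it over RPC. The full sequence, condensed from the rpc_cmd calls here and in the trace that follows (rpc.py path shortened; rpc_cmd wraps rpc.py against the target's default RPC socket):

  rpc=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this, nvme discover against 10.0.0.2:4420 should report two records (the discovery subsystem and cnode1), which is exactly what the discovery log below shows.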
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 Malloc1 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.114 [2024-07-24 01:55:27.002936] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.114 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.114 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.114 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.114 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:17:12.372 00:17:12.372 Discovery Log Number of Records 2, Generation counter 2 00:17:12.372 =====Discovery Log Entry 0====== 00:17:12.372 trtype: tcp 00:17:12.372 adrfam: ipv4 00:17:12.372 subtype: current discovery subsystem 00:17:12.372 treq: not required 
00:17:12.372 portid: 0 00:17:12.372 trsvcid: 4420 00:17:12.372 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:12.372 traddr: 10.0.0.2 00:17:12.372 eflags: explicit discovery connections, duplicate discovery information 00:17:12.372 sectype: none 00:17:12.372 =====Discovery Log Entry 1====== 00:17:12.372 trtype: tcp 00:17:12.372 adrfam: ipv4 00:17:12.372 subtype: nvme subsystem 00:17:12.372 treq: not required 00:17:12.372 portid: 0 00:17:12.372 trsvcid: 4420 00:17:12.372 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:12.372 traddr: 10.0.0.2 00:17:12.372 eflags: none 00:17:12.372 sectype: none 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:12.372 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.941 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:12.941 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:17:12.941 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.941 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:17:12.941 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:17:12.941 01:55:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:14.842 /dev/nvme0n1 ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:17:14.842 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:17:14.843 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:14.843 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.100 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:17:15.100 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.100 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:17:15.100 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:17:15.100 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.101 rmmod nvme_tcp 00:17:15.101 rmmod nvme_fabrics 00:17:15.101 rmmod nvme_keyring 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1420721 ']' 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1420721 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1420721 ']' 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1420721 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1420721 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1420721' 00:17:15.101 killing process with pid 1420721 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1420721 00:17:15.101 01:55:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1420721 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.359 01:55:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.897 00:17:17.897 real 0m7.894s 00:17:17.897 user 0m14.086s 00:17:17.897 sys 0m2.181s 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:17.897 ************************************ 00:17:17.897 END TEST nvmf_nvme_cli 00:17:17.897 ************************************ 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.897 ************************************ 00:17:17.897 START TEST nvmf_vfio_user 00:17:17.897 ************************************ 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:17.897 * Looking for test storage... 
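[Editor's note] The nvme_cli test that just finished drives its device discovery and teardown through two small helpers visible in the xtrace above. A minimal sketch of both, reconstructed from that trace (nvmf/common.sh and the test's waitforserial_disconnect remain the authoritative sources; exact option handling may differ):

    # Collect kernel NVMe block devices by filtering `nvme list` output.
    get_nvme_devs() {
        local dev _
        while read -r dev _; do
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }

    devs=($(get_nvme_devs))   # e.g. /dev/nvme0n1 /dev/nvme0n2 in this run
    nvme_num=${#devs[@]}      # 2 namespaces seen over the fabric connection

    # Teardown: disconnect the fabric controller, then wait for the serial to vanish.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done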
00:17:17.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.897 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:17.898 01:55:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1421524 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1421524' 00:17:17.898 Process pid: 1421524 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1421524 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1421524 ']' 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:17.898 [2024-07-24 01:55:32.412930] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:17:17.898 [2024-07-24 01:55:32.413026] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.898 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.898 [2024-07-24 01:55:32.471392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.898 [2024-07-24 01:55:32.559122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.898 [2024-07-24 01:55:32.559176] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:17.898 [2024-07-24 01:55:32.559204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.898 [2024-07-24 01:55:32.559215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.898 [2024-07-24 01:55:32.559231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.898 [2024-07-24 01:55:32.559312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.898 [2024-07-24 01:55:32.559391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.898 [2024-07-24 01:55:32.559451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.898 [2024-07-24 01:55:32.559454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:17.898 01:55:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:18.870 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:19.128 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:19.128 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:19.128 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:19.128 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:19.128 01:55:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:19.692 Malloc1 00:17:19.692 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:19.692 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:19.949 01:55:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:20.207 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:20.207 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:20.207 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:20.465 Malloc2 00:17:20.465 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
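[Editor's note] The vfio-user target setup traced above (and continuing below for the second device) condenses to a short script. Every command appears in the trace; the relative paths are shorthand for the absolute jenkins workspace paths, and the loop is an editorial summary rather than the literal structure of nvmf_vfio_user.sh:

    # Start the target on 4 cores with full tracepoints, then configure it over RPC.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER

    # One malloc-backed subsystem per emulated device, each listening on a vfio-user socket dir.
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done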
00:17:20.723 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:20.981 01:55:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:21.239 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:21.239 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:21.239 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:21.239 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:21.239 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:21.239 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:21.239 [2024-07-24 01:55:36.113477] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:17:21.239 [2024-07-24 01:55:36.113518] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1422061 ] 00:17:21.239 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.498 [2024-07-24 01:55:36.145567] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:21.498 [2024-07-24 01:55:36.154750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:21.498 [2024-07-24 01:55:36.154777] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f96df34b000 00:17:21.498 [2024-07-24 01:55:36.155750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.156742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.157746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.158752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.159756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.160763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.161767] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.162771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:21.498 [2024-07-24 01:55:36.163782] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:21.498 [2024-07-24 01:55:36.163802] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f96de0ff000 00:17:21.498 [2024-07-24 01:55:36.164923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:21.498 [2024-07-24 01:55:36.180571] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:21.498 [2024-07-24 01:55:36.180621] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:21.498 [2024-07-24 01:55:36.182889] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:21.498 [2024-07-24 01:55:36.182941] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:21.498 [2024-07-24 01:55:36.183032] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:21.498 [2024-07-24 01:55:36.183057] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:21.498 [2024-07-24 01:55:36.183067] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:17:21.498 [2024-07-24 01:55:36.183885] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:21.498 [2024-07-24 01:55:36.183913] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:21.498 [2024-07-24 01:55:36.183926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:21.498 [2024-07-24 01:55:36.184886] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:21.498 [2024-07-24 01:55:36.184905] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:21.498 [2024-07-24 01:55:36.184917] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:21.498 [2024-07-24 01:55:36.185890] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:21.498 [2024-07-24 01:55:36.185907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:21.498 [2024-07-24 01:55:36.186901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:21.498 [2024-07-24 01:55:36.186920] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:21.498 [2024-07-24 01:55:36.186929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:21.498 [2024-07-24 01:55:36.186939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:21.498 [2024-07-24 01:55:36.187048] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:21.498 [2024-07-24 01:55:36.187056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:21.498 [2024-07-24 01:55:36.187064] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:21.498 [2024-07-24 01:55:36.187911] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:21.498 [2024-07-24 01:55:36.188912] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:21.498 [2024-07-24 01:55:36.189917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:21.498 [2024-07-24 01:55:36.190911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:21.498 [2024-07-24 01:55:36.191015] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:21.498 [2024-07-24 01:55:36.191933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:21.498 [2024-07-24 01:55:36.191950] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:21.498 [2024-07-24 01:55:36.191959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:21.498 [2024-07-24 01:55:36.191982] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:21.498 [2024-07-24 01:55:36.191994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:21.498 [2024-07-24 01:55:36.192020] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:21.499 [2024-07-24 01:55:36.192029] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:21.499 [2024-07-24 01:55:36.192035] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.499 [2024-07-24 01:55:36.192053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192115] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:21.499 [2024-07-24 01:55:36.192123] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:21.499 [2024-07-24 01:55:36.192130] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:21.499 [2024-07-24 01:55:36.192137] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:21.499 [2024-07-24 01:55:36.192144] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:21.499 [2024-07-24 01:55:36.192152] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:21.499 [2024-07-24 01:55:36.192159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.499 [2024-07-24 01:55:36.192234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.499 [2024-07-24 01:55:36.192245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.499 [2024-07-24 01:55:36.192256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.499 [2024-07-24 01:55:36.192264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192340] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:21.499 
[2024-07-24 01:55:36.192349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192485] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192498] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:21.499 [2024-07-24 01:55:36.192506] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:21.499 [2024-07-24 01:55:36.192512] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.499 [2024-07-24 01:55:36.192522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192556] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:21.499 [2024-07-24 01:55:36.192576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192617] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:21.499 [2024-07-24 01:55:36.192624] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:21.499 [2024-07-24 01:55:36.192630] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.499 [2024-07-24 01:55:36.192639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192723] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:21.499 [2024-07-24 01:55:36.192731] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:21.499 [2024-07-24 01:55:36.192736] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.499 [2024-07-24 01:55:36.192745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192830] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:21.499 [2024-07-24 01:55:36.192837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:21.499 [2024-07-24 01:55:36.192845] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:21.499 [2024-07-24 01:55:36.192869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 
01:55:36.192943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.192969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:21.499 [2024-07-24 01:55:36.192989] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:21.499 [2024-07-24 01:55:36.192998] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:21.499 [2024-07-24 01:55:36.193004] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:21.499 [2024-07-24 01:55:36.193010] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:21.499 [2024-07-24 01:55:36.193016] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:21.499 [2024-07-24 01:55:36.193024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:21.499 [2024-07-24 01:55:36.193035] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:21.499 [2024-07-24 01:55:36.193042] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:21.499 [2024-07-24 01:55:36.193048] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.499 [2024-07-24 01:55:36.193056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:21.499 [2024-07-24 01:55:36.193066] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:21.499 [2024-07-24 01:55:36.193077] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:21.500 [2024-07-24 01:55:36.193083] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.500 [2024-07-24 01:55:36.193092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:21.500 [2024-07-24 01:55:36.193103] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:21.500 [2024-07-24 01:55:36.193110] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:21.500 [2024-07-24 01:55:36.193116] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:21.500 [2024-07-24 01:55:36.193124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:21.500 [2024-07-24 01:55:36.193135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:21.500 [2024-07-24 01:55:36.193153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:21.500 [2024-07-24 
01:55:36.193171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:21.500 [2024-07-24 01:55:36.193183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:21.500 ===================================================== 00:17:21.500 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:21.500 ===================================================== 00:17:21.500 Controller Capabilities/Features 00:17:21.500 ================================ 00:17:21.500 Vendor ID: 4e58 00:17:21.500 Subsystem Vendor ID: 4e58 00:17:21.500 Serial Number: SPDK1 00:17:21.500 Model Number: SPDK bdev Controller 00:17:21.500 Firmware Version: 24.09 00:17:21.500 Recommended Arb Burst: 6 00:17:21.500 IEEE OUI Identifier: 8d 6b 50 00:17:21.500 Multi-path I/O 00:17:21.500 May have multiple subsystem ports: Yes 00:17:21.500 May have multiple controllers: Yes 00:17:21.500 Associated with SR-IOV VF: No 00:17:21.500 Max Data Transfer Size: 131072 00:17:21.500 Max Number of Namespaces: 32 00:17:21.500 Max Number of I/O Queues: 127 00:17:21.500 NVMe Specification Version (VS): 1.3 00:17:21.500 NVMe Specification Version (Identify): 1.3 00:17:21.500 Maximum Queue Entries: 256 00:17:21.500 Contiguous Queues Required: Yes 00:17:21.500 Arbitration Mechanisms Supported 00:17:21.500 Weighted Round Robin: Not Supported 00:17:21.500 Vendor Specific: Not Supported 00:17:21.500 Reset Timeout: 15000 ms 00:17:21.500 Doorbell Stride: 4 bytes 00:17:21.500 NVM Subsystem Reset: Not Supported 00:17:21.500 Command Sets Supported 00:17:21.500 NVM Command Set: Supported 00:17:21.500 Boot Partition: Not Supported 00:17:21.500 Memory Page Size Minimum: 4096 bytes 00:17:21.500 Memory Page Size Maximum: 4096 bytes 00:17:21.500 Persistent Memory Region: Not Supported 00:17:21.500 Optional Asynchronous Events Supported 00:17:21.500 Namespace Attribute Notices: Supported 00:17:21.500 Firmware Activation Notices: Not Supported 00:17:21.500 ANA Change Notices: Not Supported 00:17:21.500 PLE Aggregate Log Change Notices: Not Supported 00:17:21.500 LBA Status Info Alert Notices: Not Supported 00:17:21.500 EGE Aggregate Log Change Notices: Not Supported 00:17:21.500 Normal NVM Subsystem Shutdown event: Not Supported 00:17:21.500 Zone Descriptor Change Notices: Not Supported 00:17:21.500 Discovery Log Change Notices: Not Supported 00:17:21.500 Controller Attributes 00:17:21.500 128-bit Host Identifier: Supported 00:17:21.500 Non-Operational Permissive Mode: Not Supported 00:17:21.500 NVM Sets: Not Supported 00:17:21.500 Read Recovery Levels: Not Supported 00:17:21.500 Endurance Groups: Not Supported 00:17:21.500 Predictable Latency Mode: Not Supported 00:17:21.500 Traffic Based Keep ALive: Not Supported 00:17:21.500 Namespace Granularity: Not Supported 00:17:21.500 SQ Associations: Not Supported 00:17:21.500 UUID List: Not Supported 00:17:21.500 Multi-Domain Subsystem: Not Supported 00:17:21.500 Fixed Capacity Management: Not Supported 00:17:21.500 Variable Capacity Management: Not Supported 00:17:21.500 Delete Endurance Group: Not Supported 00:17:21.500 Delete NVM Set: Not Supported 00:17:21.500 Extended LBA Formats Supported: Not Supported 00:17:21.500 Flexible Data Placement Supported: Not Supported 00:17:21.500 00:17:21.500 Controller Memory Buffer Support 00:17:21.500 ================================ 00:17:21.500 Supported: No 00:17:21.500 00:17:21.500 Persistent 
Memory Region Support 00:17:21.500 ================================ 00:17:21.500 Supported: No 00:17:21.500 00:17:21.500 Admin Command Set Attributes 00:17:21.500 ============================ 00:17:21.500 Security Send/Receive: Not Supported 00:17:21.500 Format NVM: Not Supported 00:17:21.500 Firmware Activate/Download: Not Supported 00:17:21.500 Namespace Management: Not Supported 00:17:21.500 Device Self-Test: Not Supported 00:17:21.500 Directives: Not Supported 00:17:21.500 NVMe-MI: Not Supported 00:17:21.500 Virtualization Management: Not Supported 00:17:21.500 Doorbell Buffer Config: Not Supported 00:17:21.500 Get LBA Status Capability: Not Supported 00:17:21.500 Command & Feature Lockdown Capability: Not Supported 00:17:21.500 Abort Command Limit: 4 00:17:21.500 Async Event Request Limit: 4 00:17:21.500 Number of Firmware Slots: N/A 00:17:21.500 Firmware Slot 1 Read-Only: N/A 00:17:21.500 Firmware Activation Without Reset: N/A 00:17:21.500 Multiple Update Detection Support: N/A 00:17:21.500 Firmware Update Granularity: No Information Provided 00:17:21.500 Per-Namespace SMART Log: No 00:17:21.500 Asymmetric Namespace Access Log Page: Not Supported 00:17:21.500 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:21.500 Command Effects Log Page: Supported 00:17:21.500 Get Log Page Extended Data: Supported 00:17:21.500 Telemetry Log Pages: Not Supported 00:17:21.500 Persistent Event Log Pages: Not Supported 00:17:21.500 Supported Log Pages Log Page: May Support 00:17:21.500 Commands Supported & Effects Log Page: Not Supported 00:17:21.500 Feature Identifiers & Effects Log Page:May Support 00:17:21.500 NVMe-MI Commands & Effects Log Page: May Support 00:17:21.500 Data Area 4 for Telemetry Log: Not Supported 00:17:21.500 Error Log Page Entries Supported: 128 00:17:21.500 Keep Alive: Supported 00:17:21.500 Keep Alive Granularity: 10000 ms 00:17:21.500 00:17:21.500 NVM Command Set Attributes 00:17:21.500 ========================== 00:17:21.500 Submission Queue Entry Size 00:17:21.500 Max: 64 00:17:21.500 Min: 64 00:17:21.500 Completion Queue Entry Size 00:17:21.500 Max: 16 00:17:21.500 Min: 16 00:17:21.500 Number of Namespaces: 32 00:17:21.500 Compare Command: Supported 00:17:21.500 Write Uncorrectable Command: Not Supported 00:17:21.500 Dataset Management Command: Supported 00:17:21.500 Write Zeroes Command: Supported 00:17:21.500 Set Features Save Field: Not Supported 00:17:21.500 Reservations: Not Supported 00:17:21.500 Timestamp: Not Supported 00:17:21.500 Copy: Supported 00:17:21.500 Volatile Write Cache: Present 00:17:21.500 Atomic Write Unit (Normal): 1 00:17:21.500 Atomic Write Unit (PFail): 1 00:17:21.500 Atomic Compare & Write Unit: 1 00:17:21.500 Fused Compare & Write: Supported 00:17:21.500 Scatter-Gather List 00:17:21.500 SGL Command Set: Supported (Dword aligned) 00:17:21.500 SGL Keyed: Not Supported 00:17:21.500 SGL Bit Bucket Descriptor: Not Supported 00:17:21.500 SGL Metadata Pointer: Not Supported 00:17:21.500 Oversized SGL: Not Supported 00:17:21.500 SGL Metadata Address: Not Supported 00:17:21.500 SGL Offset: Not Supported 00:17:21.500 Transport SGL Data Block: Not Supported 00:17:21.500 Replay Protected Memory Block: Not Supported 00:17:21.500 00:17:21.500 Firmware Slot Information 00:17:21.500 ========================= 00:17:21.500 Active slot: 1 00:17:21.500 Slot 1 Firmware Revision: 24.09 00:17:21.500 00:17:21.500 00:17:21.500 Commands Supported and Effects 00:17:21.500 ============================== 00:17:21.500 Admin Commands 00:17:21.500 -------------- 00:17:21.500 Get 
Log Page (02h): Supported 00:17:21.500 Identify (06h): Supported 00:17:21.500 Abort (08h): Supported 00:17:21.500 Set Features (09h): Supported 00:17:21.500 Get Features (0Ah): Supported 00:17:21.500 Asynchronous Event Request (0Ch): Supported 00:17:21.500 Keep Alive (18h): Supported 00:17:21.500 I/O Commands 00:17:21.500 ------------ 00:17:21.500 Flush (00h): Supported LBA-Change 00:17:21.500 Write (01h): Supported LBA-Change 00:17:21.500 Read (02h): Supported 00:17:21.500 Compare (05h): Supported 00:17:21.500 Write Zeroes (08h): Supported LBA-Change 00:17:21.500 Dataset Management (09h): Supported LBA-Change 00:17:21.500 Copy (19h): Supported LBA-Change 00:17:21.501 00:17:21.501 Error Log 00:17:21.501 ========= 00:17:21.501 00:17:21.501 Arbitration 00:17:21.501 =========== 00:17:21.501 Arbitration Burst: 1 00:17:21.501 00:17:21.501 Power Management 00:17:21.501 ================ 00:17:21.501 Number of Power States: 1 00:17:21.501 Current Power State: Power State #0 00:17:21.501 Power State #0: 00:17:21.501 Max Power: 0.00 W 00:17:21.501 Non-Operational State: Operational 00:17:21.501 Entry Latency: Not Reported 00:17:21.501 Exit Latency: Not Reported 00:17:21.501 Relative Read Throughput: 0 00:17:21.501 Relative Read Latency: 0 00:17:21.501 Relative Write Throughput: 0 00:17:21.501 Relative Write Latency: 0 00:17:21.501 Idle Power: Not Reported 00:17:21.501 Active Power: Not Reported 00:17:21.501 Non-Operational Permissive Mode: Not Supported 00:17:21.501 00:17:21.501 Health Information 00:17:21.501 ================== 00:17:21.501 Critical Warnings: 00:17:21.501 Available Spare Space: OK 00:17:21.501 Temperature: OK 00:17:21.501 Device Reliability: OK 00:17:21.501 Read Only: No 00:17:21.501 Volatile Memory Backup: OK 00:17:21.501 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:21.501 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:21.501 Available Spare: 0% 00:17:21.501 Available Sp[2024-07-24 01:55:36.193308] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:21.501 [2024-07-24 01:55:36.193334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:21.501 [2024-07-24 01:55:36.193376] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:21.501 [2024-07-24 01:55:36.193394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.501 [2024-07-24 01:55:36.193404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.501 [2024-07-24 01:55:36.193414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.501 [2024-07-24 01:55:36.193424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.501 [2024-07-24 01:55:36.196327] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:21.501 [2024-07-24 01:55:36.196349] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:21.501 [2024-07-24 01:55:36.196958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:21.501 [2024-07-24 01:55:36.197041] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:21.501 [2024-07-24 01:55:36.197055] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:21.501 [2024-07-24 01:55:36.197967] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:21.501 [2024-07-24 01:55:36.197989] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:21.501 [2024-07-24 01:55:36.198041] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:21.501 [2024-07-24 01:55:36.201326] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:21.501 are Threshold: 0% 00:17:21.501 Life Percentage Used: 0% 00:17:21.501 Data Units Read: 0 00:17:21.501 Data Units Written: 0 00:17:21.501 Host Read Commands: 0 00:17:21.501 Host Write Commands: 0 00:17:21.501 Controller Busy Time: 0 minutes 00:17:21.501 Power Cycles: 0 00:17:21.501 Power On Hours: 0 hours 00:17:21.501 Unsafe Shutdowns: 0 00:17:21.501 Unrecoverable Media Errors: 0 00:17:21.501 Lifetime Error Log Entries: 0 00:17:21.501 Warning Temperature Time: 0 minutes 00:17:21.501 Critical Temperature Time: 0 minutes 00:17:21.501 00:17:21.501 Number of Queues 00:17:21.501 ================ 00:17:21.501 Number of I/O Submission Queues: 127 00:17:21.501 Number of I/O Completion Queues: 127 00:17:21.501 00:17:21.501 Active Namespaces 00:17:21.501 ================= 00:17:21.501 Namespace ID:1 00:17:21.501 Error Recovery Timeout: Unlimited 00:17:21.501 Command Set Identifier: NVM (00h) 00:17:21.501 Deallocate: Supported 00:17:21.501 Deallocated/Unwritten Error: Not Supported 00:17:21.501 Deallocated Read Value: Unknown 00:17:21.501 Deallocate in Write Zeroes: Not Supported 00:17:21.501 Deallocated Guard Field: 0xFFFF 00:17:21.501 Flush: Supported 00:17:21.501 Reservation: Supported 00:17:21.501 Namespace Sharing Capabilities: Multiple Controllers 00:17:21.501 Size (in LBAs): 131072 (0GiB) 00:17:21.501 Capacity (in LBAs): 131072 (0GiB) 00:17:21.501 Utilization (in LBAs): 131072 (0GiB) 00:17:21.501 NGUID: DD8BACE42E9D491F8F85B052D94096A7 00:17:21.501 UUID: dd8bace4-2e9d-491f-8f85-b052d94096a7 00:17:21.501 Thin Provisioning: Not Supported 00:17:21.501 Per-NS Atomic Units: Yes 00:17:21.501 Atomic Boundary Size (Normal): 0 00:17:21.501 Atomic Boundary Size (PFail): 0 00:17:21.501 Atomic Boundary Offset: 0 00:17:21.501 Maximum Single Source Range Length: 65535 00:17:21.501 Maximum Copy Length: 65535 00:17:21.501 Maximum Source Range Count: 1 00:17:21.501 NGUID/EUI64 Never Reused: No 00:17:21.501 Namespace Write Protected: No 00:17:21.501 Number of LBA Formats: 1 00:17:21.501 Current LBA Format: LBA Format #00 00:17:21.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:21.501 00:17:21.501 01:55:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:21.501 EAL: No free 2048 kB hugepages reported 
on node 1 00:17:21.759 [2024-07-24 01:55:36.423175] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:27.033 Initializing NVMe Controllers 00:17:27.033 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:27.033 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:27.033 Initialization complete. Launching workers. 00:17:27.033 ======================================================== 00:17:27.033 Latency(us) 00:17:27.033 Device Information : IOPS MiB/s Average min max 00:17:27.033 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33692.83 131.61 3799.26 1164.71 8274.71 00:17:27.033 ======================================================== 00:17:27.033 Total : 33692.83 131.61 3799.26 1164.71 8274.71 00:17:27.033 00:17:27.033 [2024-07-24 01:55:41.442811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:27.033 01:55:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:27.033 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.033 [2024-07-24 01:55:41.688978] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:32.305 Initializing NVMe Controllers 00:17:32.305 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:32.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:32.305 Initialization complete. Launching workers. 
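[Editor's note] The identify and perf runs in this stretch all address the same vfio-user endpoint; a condensed view of their command lines as used in this run (relative paths stand in for the jenkins workspace paths shown in the trace):

    r='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

    # Identify the controller and namespace over the vfio-user transport.
    ./build/bin/spdk_nvme_identify -r "$r" -g -L nvme -L nvme_vfio -L vfio_pci

    # 5-second 4 KiB read and write runs at queue depth 128 on core 1 (-c 0x2).
    ./build/bin/spdk_nvme_perf -r "$r" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    ./build/bin/spdk_nvme_perf -r "$r" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2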
00:17:32.305 ======================================================== 00:17:32.305 Latency(us) 00:17:32.305 Device Information : IOPS MiB/s Average min max 00:17:32.305 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.18 7189.78 12061.37 00:17:32.305 ======================================================== 00:17:32.305 Total : 16025.60 62.60 7997.18 7189.78 12061.37 00:17:32.305 00:17:32.305 [2024-07-24 01:55:46.725652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:32.305 01:55:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:32.305 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.305 [2024-07-24 01:55:46.936760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:37.574 [2024-07-24 01:55:52.019723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:37.574 Initializing NVMe Controllers 00:17:37.574 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:37.574 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:37.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:37.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:37.574 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:37.574 Initialization complete. Launching workers. 00:17:37.574 Starting thread on core 2 00:17:37.574 Starting thread on core 3 00:17:37.574 Starting thread on core 1 00:17:37.574 01:55:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:37.574 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.574 [2024-07-24 01:55:52.316791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.868 [2024-07-24 01:55:55.382011] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.868 Initializing NVMe Controllers 00:17:40.868 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:40.868 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:40.868 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:40.868 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:40.868 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:40.868 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:40.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:40.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:40.868 Initialization complete. Launching workers. 
00:17:40.868 Starting thread on core 1 with urgent priority queue 00:17:40.868 Starting thread on core 2 with urgent priority queue 00:17:40.868 Starting thread on core 3 with urgent priority queue 00:17:40.868 Starting thread on core 0 with urgent priority queue 00:17:40.868 SPDK bdev Controller (SPDK1 ) core 0: 5713.67 IO/s 17.50 secs/100000 ios 00:17:40.868 SPDK bdev Controller (SPDK1 ) core 1: 5361.33 IO/s 18.65 secs/100000 ios 00:17:40.868 SPDK bdev Controller (SPDK1 ) core 2: 5902.00 IO/s 16.94 secs/100000 ios 00:17:40.868 SPDK bdev Controller (SPDK1 ) core 3: 5815.67 IO/s 17.19 secs/100000 ios 00:17:40.868 ======================================================== 00:17:40.868 00:17:40.868 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:40.868 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.868 [2024-07-24 01:55:55.672835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:40.868 Initializing NVMe Controllers 00:17:40.868 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:40.868 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:40.868 Namespace ID: 1 size: 0GB 00:17:40.868 Initialization complete. 00:17:40.868 INFO: using host memory buffer for IO 00:17:40.868 Hello world! 00:17:40.868 [2024-07-24 01:55:55.706342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:40.868 01:55:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:41.128 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.128 [2024-07-24 01:55:55.990756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:42.505 Initializing NVMe Controllers 00:17:42.505 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:42.505 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:42.506 Initialization complete. Launching workers. 
00:17:42.506 submit (in ns) avg, min, max = 8110.5, 3518.9, 4018092.2 00:17:42.506 complete (in ns) avg, min, max = 25603.6, 2063.3, 4017495.6 00:17:42.506 00:17:42.506 Submit histogram 00:17:42.506 ================ 00:17:42.506 Range in us Cumulative Count 00:17:42.506 3.508 - 3.532: 0.1109% ( 15) 00:17:42.506 3.532 - 3.556: 1.0059% ( 121) 00:17:42.506 3.556 - 3.579: 2.8994% ( 256) 00:17:42.506 3.579 - 3.603: 7.6701% ( 645) 00:17:42.506 3.603 - 3.627: 15.3772% ( 1042) 00:17:42.506 3.627 - 3.650: 25.2811% ( 1339) 00:17:42.506 3.650 - 3.674: 33.6538% ( 1132) 00:17:42.506 3.674 - 3.698: 40.9172% ( 982) 00:17:42.506 3.698 - 3.721: 47.1524% ( 843) 00:17:42.506 3.721 - 3.745: 52.2559% ( 690) 00:17:42.506 3.745 - 3.769: 57.1080% ( 656) 00:17:42.506 3.769 - 3.793: 61.2352% ( 558) 00:17:42.506 3.793 - 3.816: 64.6820% ( 466) 00:17:42.506 3.816 - 3.840: 68.2249% ( 479) 00:17:42.506 3.840 - 3.864: 72.3595% ( 559) 00:17:42.506 3.864 - 3.887: 76.5533% ( 567) 00:17:42.506 3.887 - 3.911: 80.3254% ( 510) 00:17:42.506 3.911 - 3.935: 83.3210% ( 405) 00:17:42.506 3.935 - 3.959: 85.6139% ( 310) 00:17:42.506 3.959 - 3.982: 87.6036% ( 269) 00:17:42.506 3.982 - 4.006: 89.3195% ( 232) 00:17:42.506 4.006 - 4.030: 90.7175% ( 189) 00:17:42.506 4.030 - 4.053: 92.0118% ( 175) 00:17:42.506 4.053 - 4.077: 92.9438% ( 126) 00:17:42.506 4.077 - 4.101: 93.8388% ( 121) 00:17:42.506 4.101 - 4.124: 94.5414% ( 95) 00:17:42.506 4.124 - 4.148: 95.1553% ( 83) 00:17:42.506 4.148 - 4.172: 95.5473% ( 53) 00:17:42.506 4.172 - 4.196: 95.8284% ( 38) 00:17:42.506 4.196 - 4.219: 96.0651% ( 32) 00:17:42.506 4.219 - 4.243: 96.2722% ( 28) 00:17:42.506 4.243 - 4.267: 96.4275% ( 21) 00:17:42.506 4.267 - 4.290: 96.5237% ( 13) 00:17:42.506 4.290 - 4.314: 96.6050% ( 11) 00:17:42.506 4.314 - 4.338: 96.7382% ( 18) 00:17:42.506 4.338 - 4.361: 96.8121% ( 10) 00:17:42.506 4.361 - 4.385: 96.8491% ( 5) 00:17:42.506 4.385 - 4.409: 96.9379% ( 12) 00:17:42.506 4.409 - 4.433: 96.9749% ( 5) 00:17:42.506 4.433 - 4.456: 96.9970% ( 3) 00:17:42.506 4.456 - 4.480: 97.0562% ( 8) 00:17:42.506 4.480 - 4.504: 97.1006% ( 6) 00:17:42.506 4.504 - 4.527: 97.1154% ( 2) 00:17:42.506 4.527 - 4.551: 97.1376% ( 3) 00:17:42.506 4.551 - 4.575: 97.1820% ( 6) 00:17:42.506 4.575 - 4.599: 97.2189% ( 5) 00:17:42.506 4.599 - 4.622: 97.2707% ( 7) 00:17:42.506 4.622 - 4.646: 97.3151% ( 6) 00:17:42.506 4.646 - 4.670: 97.3743% ( 8) 00:17:42.506 4.670 - 4.693: 97.4408% ( 9) 00:17:42.506 4.693 - 4.717: 97.5000% ( 8) 00:17:42.506 4.717 - 4.741: 97.5592% ( 8) 00:17:42.506 4.741 - 4.764: 97.5814% ( 3) 00:17:42.506 4.764 - 4.788: 97.6183% ( 5) 00:17:42.506 4.788 - 4.812: 97.6775% ( 8) 00:17:42.506 4.812 - 4.836: 97.6997% ( 3) 00:17:42.506 4.836 - 4.859: 97.7367% ( 5) 00:17:42.506 4.859 - 4.883: 97.7885% ( 7) 00:17:42.506 4.883 - 4.907: 97.8180% ( 4) 00:17:42.506 4.907 - 4.930: 97.8550% ( 5) 00:17:42.506 4.930 - 4.954: 97.8920% ( 5) 00:17:42.506 4.954 - 4.978: 97.9290% ( 5) 00:17:42.506 4.978 - 5.001: 97.9660% ( 5) 00:17:42.506 5.001 - 5.025: 97.9734% ( 1) 00:17:42.506 5.025 - 5.049: 97.9956% ( 3) 00:17:42.506 5.049 - 5.073: 98.0325% ( 5) 00:17:42.506 5.073 - 5.096: 98.0399% ( 1) 00:17:42.506 5.096 - 5.120: 98.0621% ( 3) 00:17:42.506 5.120 - 5.144: 98.0695% ( 1) 00:17:42.506 5.144 - 5.167: 98.0769% ( 1) 00:17:42.506 5.191 - 5.215: 98.0991% ( 3) 00:17:42.506 5.239 - 5.262: 98.1287% ( 4) 00:17:42.506 5.286 - 5.310: 98.1509% ( 3) 00:17:42.506 5.333 - 5.357: 98.1583% ( 1) 00:17:42.506 5.404 - 5.428: 98.1657% ( 1) 00:17:42.506 5.452 - 5.476: 98.1805% ( 2) 00:17:42.506 5.476 - 5.499: 98.1879% ( 1) 
00:17:42.506 5.547 - 5.570: 98.1953% ( 1) 00:17:42.506 5.618 - 5.641: 98.2027% ( 1) 00:17:42.506 5.689 - 5.713: 98.2101% ( 1) 00:17:42.506 5.713 - 5.736: 98.2175% ( 1) 00:17:42.506 5.926 - 5.950: 98.2249% ( 1) 00:17:42.506 6.068 - 6.116: 98.2322% ( 1) 00:17:42.506 6.258 - 6.305: 98.2396% ( 1) 00:17:42.506 6.353 - 6.400: 98.2544% ( 2) 00:17:42.506 6.447 - 6.495: 98.2618% ( 1) 00:17:42.506 6.590 - 6.637: 98.2692% ( 1) 00:17:42.506 6.637 - 6.684: 98.2766% ( 1) 00:17:42.506 6.874 - 6.921: 98.2840% ( 1) 00:17:42.506 6.921 - 6.969: 98.3136% ( 4) 00:17:42.506 7.111 - 7.159: 98.3210% ( 1) 00:17:42.506 7.348 - 7.396: 98.3284% ( 1) 00:17:42.506 7.443 - 7.490: 98.3432% ( 2) 00:17:42.506 7.538 - 7.585: 98.3654% ( 3) 00:17:42.506 7.585 - 7.633: 98.3728% ( 1) 00:17:42.506 7.633 - 7.680: 98.3802% ( 1) 00:17:42.506 7.680 - 7.727: 98.3876% ( 1) 00:17:42.506 7.775 - 7.822: 98.3950% ( 1) 00:17:42.506 7.822 - 7.870: 98.4024% ( 1) 00:17:42.506 7.870 - 7.917: 98.4098% ( 1) 00:17:42.506 7.917 - 7.964: 98.4246% ( 2) 00:17:42.506 7.964 - 8.012: 98.4393% ( 2) 00:17:42.506 8.012 - 8.059: 98.4763% ( 5) 00:17:42.506 8.059 - 8.107: 98.4837% ( 1) 00:17:42.506 8.107 - 8.154: 98.5133% ( 4) 00:17:42.506 8.201 - 8.249: 98.5207% ( 1) 00:17:42.506 8.249 - 8.296: 98.5355% ( 2) 00:17:42.506 8.296 - 8.344: 98.5429% ( 1) 00:17:42.506 8.344 - 8.391: 98.5577% ( 2) 00:17:42.506 8.391 - 8.439: 98.5725% ( 2) 00:17:42.506 8.439 - 8.486: 98.5799% ( 1) 00:17:42.506 8.486 - 8.533: 98.5873% ( 1) 00:17:42.506 8.533 - 8.581: 98.5947% ( 1) 00:17:42.506 8.581 - 8.628: 98.6021% ( 1) 00:17:42.506 8.628 - 8.676: 98.6095% ( 1) 00:17:42.506 8.770 - 8.818: 98.6243% ( 2) 00:17:42.506 8.865 - 8.913: 98.6317% ( 1) 00:17:42.506 8.913 - 8.960: 98.6391% ( 1) 00:17:42.506 9.150 - 9.197: 98.6464% ( 1) 00:17:42.506 9.197 - 9.244: 98.6538% ( 1) 00:17:42.506 9.244 - 9.292: 98.6686% ( 2) 00:17:42.506 9.387 - 9.434: 98.6760% ( 1) 00:17:42.506 9.481 - 9.529: 98.6834% ( 1) 00:17:42.506 9.529 - 9.576: 98.6982% ( 2) 00:17:42.506 9.956 - 10.003: 98.7056% ( 1) 00:17:42.506 10.145 - 10.193: 98.7278% ( 3) 00:17:42.506 10.193 - 10.240: 98.7352% ( 1) 00:17:42.506 10.761 - 10.809: 98.7426% ( 1) 00:17:42.506 10.999 - 11.046: 98.7500% ( 1) 00:17:42.506 11.330 - 11.378: 98.7648% ( 2) 00:17:42.506 11.520 - 11.567: 98.7722% ( 1) 00:17:42.506 11.567 - 11.615: 98.7870% ( 2) 00:17:42.506 11.852 - 11.899: 98.8018% ( 2) 00:17:42.506 11.899 - 11.947: 98.8092% ( 1) 00:17:42.506 11.994 - 12.041: 98.8166% ( 1) 00:17:42.506 12.326 - 12.421: 98.8240% ( 1) 00:17:42.506 12.421 - 12.516: 98.8314% ( 1) 00:17:42.506 12.516 - 12.610: 98.8388% ( 1) 00:17:42.506 12.610 - 12.705: 98.8462% ( 1) 00:17:42.506 13.084 - 13.179: 98.8536% ( 1) 00:17:42.506 13.179 - 13.274: 98.8609% ( 1) 00:17:42.506 13.464 - 13.559: 98.8683% ( 1) 00:17:42.506 13.748 - 13.843: 98.8831% ( 2) 00:17:42.506 13.938 - 14.033: 98.8905% ( 1) 00:17:42.506 14.317 - 14.412: 98.8979% ( 1) 00:17:42.506 15.360 - 15.455: 98.9053% ( 1) 00:17:42.506 17.067 - 17.161: 98.9127% ( 1) 00:17:42.506 17.161 - 17.256: 98.9201% ( 1) 00:17:42.506 17.351 - 17.446: 98.9349% ( 2) 00:17:42.506 17.446 - 17.541: 98.9423% ( 1) 00:17:42.506 17.541 - 17.636: 98.9719% ( 4) 00:17:42.506 17.636 - 17.730: 99.0089% ( 5) 00:17:42.506 17.730 - 17.825: 99.1198% ( 15) 00:17:42.506 17.825 - 17.920: 99.1272% ( 1) 00:17:42.506 17.920 - 18.015: 99.1864% ( 8) 00:17:42.506 18.015 - 18.110: 99.2160% ( 4) 00:17:42.506 18.110 - 18.204: 99.2825% ( 9) 00:17:42.506 18.204 - 18.299: 99.3713% ( 12) 00:17:42.506 18.299 - 18.394: 99.4675% ( 13) 00:17:42.506 18.394 - 18.489: 99.5192% 
( 7) 00:17:42.506 18.489 - 18.584: 99.5858% ( 9) 00:17:42.506 18.584 - 18.679: 99.6228% ( 5) 00:17:42.506 18.679 - 18.773: 99.6450% ( 3) 00:17:42.506 18.773 - 18.868: 99.6893% ( 6) 00:17:42.506 18.868 - 18.963: 99.7189% ( 4) 00:17:42.506 18.963 - 19.058: 99.7633% ( 6) 00:17:42.506 19.247 - 19.342: 99.8003% ( 5) 00:17:42.506 19.342 - 19.437: 99.8077% ( 1) 00:17:42.506 19.437 - 19.532: 99.8151% ( 1) 00:17:42.506 19.816 - 19.911: 99.8373% ( 3) 00:17:42.506 19.911 - 20.006: 99.8447% ( 1) 00:17:42.506 20.290 - 20.385: 99.8521% ( 1) 00:17:42.506 22.092 - 22.187: 99.8595% ( 1) 00:17:42.506 25.031 - 25.221: 99.8669% ( 1) 00:17:42.506 26.359 - 26.548: 99.8743% ( 1) 00:17:42.507 27.876 - 28.065: 99.8817% ( 1) 00:17:42.507 28.255 - 28.444: 99.8891% ( 1) 00:17:42.507 48.166 - 48.356: 99.8964% ( 1) 00:17:42.507 3980.705 - 4004.978: 99.9704% ( 10) 00:17:42.507 4004.978 - 4029.250: 100.0000% ( 4) 00:17:42.507 00:17:42.507 Complete histogram 00:17:42.507 ================== 00:17:42.507 Range in us Cumulative Count 00:17:42.507 2.062 - 2.074: 2.4186% ( 327) 00:17:42.507 2.074 - 2.086: 31.8269% ( 3976) 00:17:42.507 2.086 - 2.098: 38.5799% ( 913) 00:17:42.507 2.098 - 2.110: 44.0311% ( 737) 00:17:42.507 2.110 - 2.121: 56.4053% ( 1673) 00:17:42.507 2.121 - 2.133: 58.3654% ( 265) 00:17:42.507 2.133 - 2.145: 63.4172% ( 683) 00:17:42.507 2.145 - 2.157: 72.5370% ( 1233) 00:17:42.507 2.157 - 2.169: 73.8462% ( 177) 00:17:42.507 2.169 - 2.181: 77.6923% ( 520) 00:17:42.507 2.181 - 2.193: 81.9305% ( 573) 00:17:42.507 2.193 - 2.204: 82.6997% ( 104) 00:17:42.507 2.204 - 2.216: 84.3047% ( 217) 00:17:42.507 2.216 - 2.228: 87.9956% ( 499) 00:17:42.507 2.228 - 2.240: 90.0444% ( 277) 00:17:42.507 2.240 - 2.252: 91.7751% ( 234) 00:17:42.507 2.252 - 2.264: 93.3728% ( 216) 00:17:42.507 2.264 - 2.276: 93.8092% ( 59) 00:17:42.507 2.276 - 2.287: 94.1198% ( 42) 00:17:42.507 2.287 - 2.299: 94.4453% ( 44) 00:17:42.507 2.299 - 2.311: 95.1036% ( 89) 00:17:42.507 2.311 - 2.323: 95.5547% ( 61) 00:17:42.507 2.323 - 2.335: 95.6139% ( 8) 00:17:42.507 2.335 - 2.347: 95.6805% ( 9) 00:17:42.507 2.347 - 2.359: 95.7544% ( 10) 00:17:42.507 2.359 - 2.370: 95.8802% ( 17) 00:17:42.507 2.370 - 2.382: 96.0799% ( 27) 00:17:42.507 2.382 - 2.394: 96.4793% ( 54) 00:17:42.507 2.394 - 2.406: 96.7234% ( 33) 00:17:42.507 2.406 - 2.418: 96.9527% ( 31) 00:17:42.507 2.418 - 2.430: 97.1893% ( 32) 00:17:42.507 2.430 - 2.441: 97.3964% ( 28) 00:17:42.507 2.441 - 2.453: 97.5370% ( 19) 00:17:42.507 2.453 - 2.465: 97.6701% ( 18) 00:17:42.507 2.465 - 2.477: 97.8033% ( 18) 00:17:42.507 2.477 - 2.489: 97.9512% ( 20) 00:17:42.507 2.489 - 2.501: 98.0325% ( 11) 00:17:42.507 2.501 - 2.513: 98.0547% ( 3) 00:17:42.507 2.513 - 2.524: 98.0843% ( 4) 00:17:42.507 2.524 - 2.536: 98.1139% ( 4) 00:17:42.507 2.536 - 2.548: 98.1657% ( 7) 00:17:42.507 2.548 - 2.560: 98.1731% ( 1) 00:17:42.507 2.572 - 2.584: 98.1879% ( 2) 00:17:42.507 2.584 - 2.596: 98.2101% ( 3) 00:17:42.507 2.607 - 2.619: 98.2175% ( 1) 00:17:42.507 2.619 - 2.631: 98.2322% ( 2) 00:17:42.507 2.631 - 2.643: 98.2396% ( 1) 00:17:42.507 2.655 - 2.667: 98.2470% ( 1) 00:17:42.507 2.679 - 2.690: 98.2618% ( 2) 00:17:42.507 2.690 - 2.702: 98.2692% ( 1) 00:17:42.507 2.702 - 2.714: 98.2988% ( 4) 00:17:42.507 2.714 - 2.726: 98.3062% ( 1) 00:17:42.507 2.726 - 2.738: 98.3284% ( 3) 00:17:42.507 2.738 - 2.750: 98.3358% ( 1) 00:17:42.507 2.750 - 2.761: 98.3506% ( 2) 00:17:42.507 2.761 - 2.773: 98.3580% ( 1) 00:17:42.507 2.785 - 2.797: 98.3654% ( 1) 00:17:42.507 2.797 - 2.809: 98.3802% ( 2) 00:17:42.507 2.833 - 2.844: 98.3876% ( 1) 
00:17:42.507 2.844 - 2.856: 98.3950% ( 1) 00:17:42.507 2.868 - 2.880: 98.4098% ( 2) 00:17:42.507 2.892 - 2.904: 98.4320% ( 3) 00:17:42.507 2.904 - 2.916: 98.4467% ( 2) 00:17:42.507 2.927 - 2.939: 98.4541% ( 1) 00:17:42.507 2.951 - 2.963: 98.4615% ( 1) 00:17:42.507 2.963 - 2.975: 98.4689% ( 1) 00:17:42.507 3.022 - 3.034: 98.4763% ( 1) 00:17:42.507 3.081 - 3.105: 98.4837% ( 1) 00:17:42.507 3.105 - 3.129: 98.4985% ( 2) 00:17:42.507 3.129 - 3.153: 98.5133% ( 2) 00:17:42.507 3.176 - 3.200: 98.5281% ( 2) 00:17:42.507 3.271 - 3.295: 9[2024-07-24 01:55:57.016946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:42.507 8.5503% ( 3) 00:17:42.507 3.295 - 3.319: 98.5577% ( 1) 00:17:42.507 3.319 - 3.342: 98.5725% ( 2) 00:17:42.507 3.342 - 3.366: 98.5947% ( 3) 00:17:42.507 3.366 - 3.390: 98.6021% ( 1) 00:17:42.507 3.390 - 3.413: 98.6169% ( 2) 00:17:42.507 3.413 - 3.437: 98.6243% ( 1) 00:17:42.507 3.437 - 3.461: 98.6317% ( 1) 00:17:42.507 3.461 - 3.484: 98.6391% ( 1) 00:17:42.507 3.508 - 3.532: 98.6538% ( 2) 00:17:42.507 3.532 - 3.556: 98.6612% ( 1) 00:17:42.507 3.556 - 3.579: 98.6686% ( 1) 00:17:42.507 3.579 - 3.603: 98.6760% ( 1) 00:17:42.507 3.603 - 3.627: 98.6834% ( 1) 00:17:42.507 3.627 - 3.650: 98.6982% ( 2) 00:17:42.507 3.698 - 3.721: 98.7056% ( 1) 00:17:42.507 3.745 - 3.769: 98.7130% ( 1) 00:17:42.507 3.816 - 3.840: 98.7204% ( 1) 00:17:42.507 3.840 - 3.864: 98.7278% ( 1) 00:17:42.507 3.864 - 3.887: 98.7352% ( 1) 00:17:42.507 3.887 - 3.911: 98.7500% ( 2) 00:17:42.507 5.001 - 5.025: 98.7574% ( 1) 00:17:42.507 5.381 - 5.404: 98.7722% ( 2) 00:17:42.507 5.404 - 5.428: 98.7796% ( 1) 00:17:42.507 5.428 - 5.452: 98.7870% ( 1) 00:17:42.507 5.641 - 5.665: 98.7944% ( 1) 00:17:42.507 5.713 - 5.736: 98.8018% ( 1) 00:17:42.507 5.760 - 5.784: 98.8092% ( 1) 00:17:42.507 5.902 - 5.926: 98.8166% ( 1) 00:17:42.507 5.973 - 5.997: 98.8240% ( 1) 00:17:42.507 6.116 - 6.163: 98.8314% ( 1) 00:17:42.507 6.400 - 6.447: 98.8388% ( 1) 00:17:42.507 6.542 - 6.590: 98.8462% ( 1) 00:17:42.507 6.921 - 6.969: 98.8536% ( 1) 00:17:42.507 7.822 - 7.870: 98.8609% ( 1) 00:17:42.507 8.723 - 8.770: 98.8683% ( 1) 00:17:42.507 8.960 - 9.007: 98.8757% ( 1) 00:17:42.507 13.084 - 13.179: 98.8831% ( 1) 00:17:42.507 15.170 - 15.265: 98.8905% ( 1) 00:17:42.507 15.644 - 15.739: 98.9127% ( 3) 00:17:42.507 15.739 - 15.834: 98.9201% ( 1) 00:17:42.507 15.834 - 15.929: 98.9645% ( 6) 00:17:42.507 15.929 - 16.024: 98.9867% ( 3) 00:17:42.507 16.024 - 16.119: 99.0163% ( 4) 00:17:42.507 16.119 - 16.213: 99.0237% ( 1) 00:17:42.507 16.213 - 16.308: 99.0607% ( 5) 00:17:42.507 16.308 - 16.403: 99.0828% ( 3) 00:17:42.507 16.403 - 16.498: 99.1198% ( 5) 00:17:42.507 16.498 - 16.593: 99.1568% ( 5) 00:17:42.507 16.593 - 16.687: 99.2086% ( 7) 00:17:42.507 16.687 - 16.782: 99.2456% ( 5) 00:17:42.507 16.782 - 16.877: 99.2825% ( 5) 00:17:42.507 16.877 - 16.972: 99.2973% ( 2) 00:17:42.507 16.972 - 17.067: 99.3195% ( 3) 00:17:42.507 17.067 - 17.161: 99.3417% ( 3) 00:17:42.507 17.161 - 17.256: 99.3713% ( 4) 00:17:42.507 17.541 - 17.636: 99.3861% ( 2) 00:17:42.507 18.394 - 18.489: 99.3935% ( 1) 00:17:42.507 18.584 - 18.679: 99.4009% ( 1) 00:17:42.507 21.428 - 21.523: 99.4083% ( 1) 00:17:42.507 30.341 - 30.530: 99.4157% ( 1) 00:17:42.507 3980.705 - 4004.978: 99.8521% ( 59) 00:17:42.507 4004.978 - 4029.250: 100.0000% ( 20) 00:17:42.507 00:17:42.507 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 
00:17:42.507 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:42.507 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:42.507 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:42.507 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:42.507 [ 00:17:42.507 { 00:17:42.507 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:42.507 "subtype": "Discovery", 00:17:42.507 "listen_addresses": [], 00:17:42.507 "allow_any_host": true, 00:17:42.507 "hosts": [] 00:17:42.507 }, 00:17:42.507 { 00:17:42.507 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:42.507 "subtype": "NVMe", 00:17:42.507 "listen_addresses": [ 00:17:42.507 { 00:17:42.507 "trtype": "VFIOUSER", 00:17:42.507 "adrfam": "IPv4", 00:17:42.507 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:42.507 "trsvcid": "0" 00:17:42.507 } 00:17:42.507 ], 00:17:42.507 "allow_any_host": true, 00:17:42.507 "hosts": [], 00:17:42.507 "serial_number": "SPDK1", 00:17:42.507 "model_number": "SPDK bdev Controller", 00:17:42.507 "max_namespaces": 32, 00:17:42.507 "min_cntlid": 1, 00:17:42.507 "max_cntlid": 65519, 00:17:42.507 "namespaces": [ 00:17:42.507 { 00:17:42.507 "nsid": 1, 00:17:42.507 "bdev_name": "Malloc1", 00:17:42.507 "name": "Malloc1", 00:17:42.507 "nguid": "DD8BACE42E9D491F8F85B052D94096A7", 00:17:42.508 "uuid": "dd8bace4-2e9d-491f-8f85-b052d94096a7" 00:17:42.508 } 00:17:42.508 ] 00:17:42.508 }, 00:17:42.508 { 00:17:42.508 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:42.508 "subtype": "NVMe", 00:17:42.508 "listen_addresses": [ 00:17:42.508 { 00:17:42.508 "trtype": "VFIOUSER", 00:17:42.508 "adrfam": "IPv4", 00:17:42.508 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:42.508 "trsvcid": "0" 00:17:42.508 } 00:17:42.508 ], 00:17:42.508 "allow_any_host": true, 00:17:42.508 "hosts": [], 00:17:42.508 "serial_number": "SPDK2", 00:17:42.508 "model_number": "SPDK bdev Controller", 00:17:42.508 "max_namespaces": 32, 00:17:42.508 "min_cntlid": 1, 00:17:42.508 "max_cntlid": 65519, 00:17:42.508 "namespaces": [ 00:17:42.508 { 00:17:42.508 "nsid": 1, 00:17:42.508 "bdev_name": "Malloc2", 00:17:42.508 "name": "Malloc2", 00:17:42.508 "nguid": "BA8589CE740C44C38E811E5457D5D3DA", 00:17:42.508 "uuid": "ba8589ce-740c-44c3-8e81-1e5457d5d3da" 00:17:42.508 } 00:17:42.508 ] 00:17:42.508 } 00:17:42.508 ] 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1424458 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:42.508 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:42.508 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.766 [2024-07-24 01:55:57.471085] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:42.766 Malloc3 00:17:42.766 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:43.024 [2024-07-24 01:55:57.847883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:43.024 01:55:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:43.024 Asynchronous Event Request test 00:17:43.024 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:43.024 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:43.024 Registering asynchronous event callbacks... 00:17:43.024 Starting namespace attribute notice tests for all controllers... 00:17:43.024 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:43.024 aer_cb - Changed Namespace 00:17:43.024 Cleaning up... 
00:17:43.284 [ 00:17:43.284 { 00:17:43.284 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:43.284 "subtype": "Discovery", 00:17:43.284 "listen_addresses": [], 00:17:43.284 "allow_any_host": true, 00:17:43.284 "hosts": [] 00:17:43.284 }, 00:17:43.284 { 00:17:43.284 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:43.284 "subtype": "NVMe", 00:17:43.284 "listen_addresses": [ 00:17:43.284 { 00:17:43.284 "trtype": "VFIOUSER", 00:17:43.284 "adrfam": "IPv4", 00:17:43.284 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:43.284 "trsvcid": "0" 00:17:43.284 } 00:17:43.284 ], 00:17:43.284 "allow_any_host": true, 00:17:43.284 "hosts": [], 00:17:43.284 "serial_number": "SPDK1", 00:17:43.284 "model_number": "SPDK bdev Controller", 00:17:43.284 "max_namespaces": 32, 00:17:43.284 "min_cntlid": 1, 00:17:43.284 "max_cntlid": 65519, 00:17:43.284 "namespaces": [ 00:17:43.284 { 00:17:43.284 "nsid": 1, 00:17:43.284 "bdev_name": "Malloc1", 00:17:43.284 "name": "Malloc1", 00:17:43.284 "nguid": "DD8BACE42E9D491F8F85B052D94096A7", 00:17:43.284 "uuid": "dd8bace4-2e9d-491f-8f85-b052d94096a7" 00:17:43.284 }, 00:17:43.284 { 00:17:43.284 "nsid": 2, 00:17:43.284 "bdev_name": "Malloc3", 00:17:43.284 "name": "Malloc3", 00:17:43.284 "nguid": "3EAB879F3D6B4737BB690DD55EF3ED09", 00:17:43.284 "uuid": "3eab879f-3d6b-4737-bb69-0dd55ef3ed09" 00:17:43.284 } 00:17:43.284 ] 00:17:43.284 }, 00:17:43.284 { 00:17:43.284 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:43.284 "subtype": "NVMe", 00:17:43.284 "listen_addresses": [ 00:17:43.284 { 00:17:43.284 "trtype": "VFIOUSER", 00:17:43.284 "adrfam": "IPv4", 00:17:43.284 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:43.284 "trsvcid": "0" 00:17:43.284 } 00:17:43.284 ], 00:17:43.284 "allow_any_host": true, 00:17:43.284 "hosts": [], 00:17:43.284 "serial_number": "SPDK2", 00:17:43.284 "model_number": "SPDK bdev Controller", 00:17:43.284 "max_namespaces": 32, 00:17:43.284 "min_cntlid": 1, 00:17:43.284 "max_cntlid": 65519, 00:17:43.284 "namespaces": [ 00:17:43.284 { 00:17:43.284 "nsid": 1, 00:17:43.284 "bdev_name": "Malloc2", 00:17:43.284 "name": "Malloc2", 00:17:43.284 "nguid": "BA8589CE740C44C38E811E5457D5D3DA", 00:17:43.284 "uuid": "ba8589ce-740c-44c3-8e81-1e5457d5d3da" 00:17:43.284 } 00:17:43.284 ] 00:17:43.284 } 00:17:43.284 ] 00:17:43.284 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1424458 00:17:43.284 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:43.284 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:43.284 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:43.284 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:43.284 [2024-07-24 01:55:58.131747] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:17:43.284 [2024-07-24 01:55:58.131793] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1424582 ] 00:17:43.284 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.284 [2024-07-24 01:55:58.166456] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:43.284 [2024-07-24 01:55:58.174639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:43.284 [2024-07-24 01:55:58.174675] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9f34bf4000 00:17:43.284 [2024-07-24 01:55:58.175637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.284 [2024-07-24 01:55:58.176643] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.284 [2024-07-24 01:55:58.177648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.284 [2024-07-24 01:55:58.178655] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:43.545 [2024-07-24 01:55:58.179663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:43.545 [2024-07-24 01:55:58.180668] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.545 [2024-07-24 01:55:58.181691] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:43.545 [2024-07-24 01:55:58.182681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:43.545 [2024-07-24 01:55:58.183686] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:43.545 [2024-07-24 01:55:58.183709] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9f339a8000 00:17:43.545 [2024-07-24 01:55:58.184825] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:43.545 [2024-07-24 01:55:58.197047] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:43.545 [2024-07-24 01:55:58.197093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:43.545 [2024-07-24 01:55:58.206235] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:43.545 [2024-07-24 01:55:58.206287] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:43.545 [2024-07-24 01:55:58.206396] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:17:43.545 [2024-07-24 01:55:58.206420] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:43.545 [2024-07-24 01:55:58.206430] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:43.545 [2024-07-24 01:55:58.207241] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:43.545 [2024-07-24 01:55:58.207267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:43.545 [2024-07-24 01:55:58.207280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:43.545 [2024-07-24 01:55:58.208246] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:43.545 [2024-07-24 01:55:58.208265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:43.545 [2024-07-24 01:55:58.208278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:43.545 [2024-07-24 01:55:58.209257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:43.545 [2024-07-24 01:55:58.209277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:43.545 [2024-07-24 01:55:58.210262] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:43.545 [2024-07-24 01:55:58.210281] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:43.545 [2024-07-24 01:55:58.210290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:43.545 [2024-07-24 01:55:58.210301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:43.545 [2024-07-24 01:55:58.210411] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:43.545 [2024-07-24 01:55:58.210421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:43.545 [2024-07-24 01:55:58.210430] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:43.545 [2024-07-24 01:55:58.211269] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:43.545 [2024-07-24 01:55:58.212272] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:43.545 [2024-07-24 01:55:58.213278] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:43.545 [2024-07-24 01:55:58.214273] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:43.545 [2024-07-24 01:55:58.214357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:43.545 [2024-07-24 01:55:58.215293] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:43.545 [2024-07-24 01:55:58.215332] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:43.545 [2024-07-24 01:55:58.215342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:43.545 [2024-07-24 01:55:58.215367] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:43.545 [2024-07-24 01:55:58.215393] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:43.545 [2024-07-24 01:55:58.215413] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:43.545 [2024-07-24 01:55:58.215423] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.545 [2024-07-24 01:55:58.215430] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.545 [2024-07-24 01:55:58.215447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.545 [2024-07-24 01:55:58.219332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:43.545 [2024-07-24 01:55:58.219354] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:43.545 [2024-07-24 01:55:58.219384] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:43.545 [2024-07-24 01:55:58.219393] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:43.545 [2024-07-24 01:55:58.219400] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:43.545 [2024-07-24 01:55:58.219409] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:43.545 [2024-07-24 01:55:58.219417] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:43.545 [2024-07-24 01:55:58.219425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:43.545 [2024-07-24 01:55:58.219439] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:43.545 [2024-07-24 01:55:58.219459] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:43.545 [2024-07-24 01:55:58.227327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:43.545 [2024-07-24 01:55:58.227354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.545 [2024-07-24 01:55:58.227376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.545 [2024-07-24 01:55:58.227388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.545 [2024-07-24 01:55:58.227400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:43.546 [2024-07-24 01:55:58.227409] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.227423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.227438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.235326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.235343] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:43.546 [2024-07-24 01:55:58.235353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.235369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.235379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.235393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.243329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.243405] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.243422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.243439] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:43.546 [2024-07-24 01:55:58.243448] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:43.546 [2024-07-24 
01:55:58.243455] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.546 [2024-07-24 01:55:58.243465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.251328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.251350] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:43.546 [2024-07-24 01:55:58.251366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.251380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.251392] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:43.546 [2024-07-24 01:55:58.251400] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.546 [2024-07-24 01:55:58.251406] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.546 [2024-07-24 01:55:58.251416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.259341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.259368] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.259384] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.259397] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:43.546 [2024-07-24 01:55:58.259405] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.546 [2024-07-24 01:55:58.259411] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.546 [2024-07-24 01:55:58.259421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.267327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.267347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.267360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.267376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:43.546 [2024-07-24 
01:55:58.267389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.267398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.267406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.267418] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:43.546 [2024-07-24 01:55:58.267426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:43.546 [2024-07-24 01:55:58.267435] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:43.546 [2024-07-24 01:55:58.267459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.275329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.275354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.283330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.283355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.291326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.291362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.299335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.299382] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:43.546 [2024-07-24 01:55:58.299394] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:43.546 [2024-07-24 01:55:58.299401] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:43.546 [2024-07-24 01:55:58.299407] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:43.546 [2024-07-24 01:55:58.299413] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:43.546 [2024-07-24 01:55:58.299423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:43.546 [2024-07-24 01:55:58.299435] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:43.546 [2024-07-24 01:55:58.299444] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:17:43.546 [2024-07-24 01:55:58.299450] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.546 [2024-07-24 01:55:58.299459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.299470] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:43.546 [2024-07-24 01:55:58.299478] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:43.546 [2024-07-24 01:55:58.299484] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.546 [2024-07-24 01:55:58.299493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.299505] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:43.546 [2024-07-24 01:55:58.299513] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:43.546 [2024-07-24 01:55:58.299519] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:43.546 [2024-07-24 01:55:58.299528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:43.546 [2024-07-24 01:55:58.307331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.307359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.307376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:43.546 [2024-07-24 01:55:58.307389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:43.546 ===================================================== 00:17:43.546 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:43.546 ===================================================== 00:17:43.546 Controller Capabilities/Features 00:17:43.546 ================================ 00:17:43.546 Vendor ID: 4e58 00:17:43.546 Subsystem Vendor ID: 4e58 00:17:43.546 Serial Number: SPDK2 00:17:43.546 Model Number: SPDK bdev Controller 00:17:43.546 Firmware Version: 24.09 00:17:43.546 Recommended Arb Burst: 6 00:17:43.546 IEEE OUI Identifier: 8d 6b 50 00:17:43.546 Multi-path I/O 00:17:43.546 May have multiple subsystem ports: Yes 00:17:43.546 May have multiple controllers: Yes 00:17:43.546 Associated with SR-IOV VF: No 00:17:43.546 Max Data Transfer Size: 131072 00:17:43.546 Max Number of Namespaces: 32 00:17:43.546 Max Number of I/O Queues: 127 00:17:43.547 NVMe Specification Version (VS): 1.3 00:17:43.547 NVMe Specification Version (Identify): 1.3 00:17:43.547 Maximum Queue Entries: 256 00:17:43.547 Contiguous Queues Required: Yes 00:17:43.547 Arbitration Mechanisms Supported 00:17:43.547 Weighted Round Robin: Not Supported 00:17:43.547 Vendor Specific: Not Supported 00:17:43.547 Reset Timeout: 15000 ms 00:17:43.547 Doorbell Stride: 4 
bytes 00:17:43.547 NVM Subsystem Reset: Not Supported 00:17:43.547 Command Sets Supported 00:17:43.547 NVM Command Set: Supported 00:17:43.547 Boot Partition: Not Supported 00:17:43.547 Memory Page Size Minimum: 4096 bytes 00:17:43.547 Memory Page Size Maximum: 4096 bytes 00:17:43.547 Persistent Memory Region: Not Supported 00:17:43.547 Optional Asynchronous Events Supported 00:17:43.547 Namespace Attribute Notices: Supported 00:17:43.547 Firmware Activation Notices: Not Supported 00:17:43.547 ANA Change Notices: Not Supported 00:17:43.547 PLE Aggregate Log Change Notices: Not Supported 00:17:43.547 LBA Status Info Alert Notices: Not Supported 00:17:43.547 EGE Aggregate Log Change Notices: Not Supported 00:17:43.547 Normal NVM Subsystem Shutdown event: Not Supported 00:17:43.547 Zone Descriptor Change Notices: Not Supported 00:17:43.547 Discovery Log Change Notices: Not Supported 00:17:43.547 Controller Attributes 00:17:43.547 128-bit Host Identifier: Supported 00:17:43.547 Non-Operational Permissive Mode: Not Supported 00:17:43.547 NVM Sets: Not Supported 00:17:43.547 Read Recovery Levels: Not Supported 00:17:43.547 Endurance Groups: Not Supported 00:17:43.547 Predictable Latency Mode: Not Supported 00:17:43.547 Traffic Based Keep ALive: Not Supported 00:17:43.547 Namespace Granularity: Not Supported 00:17:43.547 SQ Associations: Not Supported 00:17:43.547 UUID List: Not Supported 00:17:43.547 Multi-Domain Subsystem: Not Supported 00:17:43.547 Fixed Capacity Management: Not Supported 00:17:43.547 Variable Capacity Management: Not Supported 00:17:43.547 Delete Endurance Group: Not Supported 00:17:43.547 Delete NVM Set: Not Supported 00:17:43.547 Extended LBA Formats Supported: Not Supported 00:17:43.547 Flexible Data Placement Supported: Not Supported 00:17:43.547 00:17:43.547 Controller Memory Buffer Support 00:17:43.547 ================================ 00:17:43.547 Supported: No 00:17:43.547 00:17:43.547 Persistent Memory Region Support 00:17:43.547 ================================ 00:17:43.547 Supported: No 00:17:43.547 00:17:43.547 Admin Command Set Attributes 00:17:43.547 ============================ 00:17:43.547 Security Send/Receive: Not Supported 00:17:43.547 Format NVM: Not Supported 00:17:43.547 Firmware Activate/Download: Not Supported 00:17:43.547 Namespace Management: Not Supported 00:17:43.547 Device Self-Test: Not Supported 00:17:43.547 Directives: Not Supported 00:17:43.547 NVMe-MI: Not Supported 00:17:43.547 Virtualization Management: Not Supported 00:17:43.547 Doorbell Buffer Config: Not Supported 00:17:43.547 Get LBA Status Capability: Not Supported 00:17:43.547 Command & Feature Lockdown Capability: Not Supported 00:17:43.547 Abort Command Limit: 4 00:17:43.547 Async Event Request Limit: 4 00:17:43.547 Number of Firmware Slots: N/A 00:17:43.547 Firmware Slot 1 Read-Only: N/A 00:17:43.547 Firmware Activation Without Reset: N/A 00:17:43.547 Multiple Update Detection Support: N/A 00:17:43.547 Firmware Update Granularity: No Information Provided 00:17:43.547 Per-Namespace SMART Log: No 00:17:43.547 Asymmetric Namespace Access Log Page: Not Supported 00:17:43.547 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:43.547 Command Effects Log Page: Supported 00:17:43.547 Get Log Page Extended Data: Supported 00:17:43.547 Telemetry Log Pages: Not Supported 00:17:43.547 Persistent Event Log Pages: Not Supported 00:17:43.547 Supported Log Pages Log Page: May Support 00:17:43.547 Commands Supported & Effects Log Page: Not Supported 00:17:43.547 Feature Identifiers & Effects Log 
Page:May Support 00:17:43.547 NVMe-MI Commands & Effects Log Page: May Support 00:17:43.547 Data Area 4 for Telemetry Log: Not Supported 00:17:43.547 Error Log Page Entries Supported: 128 00:17:43.547 Keep Alive: Supported 00:17:43.547 Keep Alive Granularity: 10000 ms 00:17:43.547 00:17:43.547 NVM Command Set Attributes 00:17:43.547 ========================== 00:17:43.547 Submission Queue Entry Size 00:17:43.547 Max: 64 00:17:43.547 Min: 64 00:17:43.547 Completion Queue Entry Size 00:17:43.547 Max: 16 00:17:43.547 Min: 16 00:17:43.547 Number of Namespaces: 32 00:17:43.547 Compare Command: Supported 00:17:43.547 Write Uncorrectable Command: Not Supported 00:17:43.547 Dataset Management Command: Supported 00:17:43.547 Write Zeroes Command: Supported 00:17:43.547 Set Features Save Field: Not Supported 00:17:43.547 Reservations: Not Supported 00:17:43.547 Timestamp: Not Supported 00:17:43.547 Copy: Supported 00:17:43.547 Volatile Write Cache: Present 00:17:43.547 Atomic Write Unit (Normal): 1 00:17:43.547 Atomic Write Unit (PFail): 1 00:17:43.547 Atomic Compare & Write Unit: 1 00:17:43.547 Fused Compare & Write: Supported 00:17:43.547 Scatter-Gather List 00:17:43.547 SGL Command Set: Supported (Dword aligned) 00:17:43.547 SGL Keyed: Not Supported 00:17:43.547 SGL Bit Bucket Descriptor: Not Supported 00:17:43.547 SGL Metadata Pointer: Not Supported 00:17:43.547 Oversized SGL: Not Supported 00:17:43.547 SGL Metadata Address: Not Supported 00:17:43.547 SGL Offset: Not Supported 00:17:43.547 Transport SGL Data Block: Not Supported 00:17:43.547 Replay Protected Memory Block: Not Supported 00:17:43.547 00:17:43.547 Firmware Slot Information 00:17:43.547 ========================= 00:17:43.547 Active slot: 1 00:17:43.547 Slot 1 Firmware Revision: 24.09 00:17:43.547 00:17:43.547 00:17:43.547 Commands Supported and Effects 00:17:43.547 ============================== 00:17:43.547 Admin Commands 00:17:43.547 -------------- 00:17:43.547 Get Log Page (02h): Supported 00:17:43.547 Identify (06h): Supported 00:17:43.547 Abort (08h): Supported 00:17:43.547 Set Features (09h): Supported 00:17:43.547 Get Features (0Ah): Supported 00:17:43.547 Asynchronous Event Request (0Ch): Supported 00:17:43.547 Keep Alive (18h): Supported 00:17:43.547 I/O Commands 00:17:43.547 ------------ 00:17:43.547 Flush (00h): Supported LBA-Change 00:17:43.547 Write (01h): Supported LBA-Change 00:17:43.547 Read (02h): Supported 00:17:43.547 Compare (05h): Supported 00:17:43.547 Write Zeroes (08h): Supported LBA-Change 00:17:43.547 Dataset Management (09h): Supported LBA-Change 00:17:43.547 Copy (19h): Supported LBA-Change 00:17:43.547 00:17:43.547 Error Log 00:17:43.547 ========= 00:17:43.547 00:17:43.547 Arbitration 00:17:43.547 =========== 00:17:43.547 Arbitration Burst: 1 00:17:43.547 00:17:43.547 Power Management 00:17:43.547 ================ 00:17:43.547 Number of Power States: 1 00:17:43.547 Current Power State: Power State #0 00:17:43.547 Power State #0: 00:17:43.547 Max Power: 0.00 W 00:17:43.547 Non-Operational State: Operational 00:17:43.547 Entry Latency: Not Reported 00:17:43.547 Exit Latency: Not Reported 00:17:43.547 Relative Read Throughput: 0 00:17:43.547 Relative Read Latency: 0 00:17:43.547 Relative Write Throughput: 0 00:17:43.547 Relative Write Latency: 0 00:17:43.547 Idle Power: Not Reported 00:17:43.547 Active Power: Not Reported 00:17:43.547 Non-Operational Permissive Mode: Not Supported 00:17:43.547 00:17:43.547 Health Information 00:17:43.547 ================== 00:17:43.547 Critical Warnings: 00:17:43.547 
Available Spare Space: OK 00:17:43.547 Temperature: OK 00:17:43.547 Device Reliability: OK 00:17:43.547 Read Only: No 00:17:43.547 Volatile Memory Backup: OK 00:17:43.547 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:43.547 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:43.547 Available Spare: 0% 00:17:43.547 Available Sp[2024-07-24 01:55:58.307505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:43.547 [2024-07-24 01:55:58.315328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:43.547 [2024-07-24 01:55:58.315376] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:43.547 [2024-07-24 01:55:58.315394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.547 [2024-07-24 01:55:58.315405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.548 [2024-07-24 01:55:58.315416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.548 [2024-07-24 01:55:58.315426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:43.548 [2024-07-24 01:55:58.315495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:43.548 [2024-07-24 01:55:58.315516] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:43.548 [2024-07-24 01:55:58.316499] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:43.548 [2024-07-24 01:55:58.316585] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:43.548 [2024-07-24 01:55:58.316624] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:43.548 [2024-07-24 01:55:58.317525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:43.548 [2024-07-24 01:55:58.317549] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:43.548 [2024-07-24 01:55:58.317616] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:43.548 [2024-07-24 01:55:58.320329] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:43.548 are Threshold: 0% 00:17:43.548 Life Percentage Used: 0% 00:17:43.548 Data Units Read: 0 00:17:43.548 Data Units Written: 0 00:17:43.548 Host Read Commands: 0 00:17:43.548 Host Write Commands: 0 00:17:43.548 Controller Busy Time: 0 minutes 00:17:43.548 Power Cycles: 0 00:17:43.548 Power On Hours: 0 hours 00:17:43.548 Unsafe Shutdowns: 0 00:17:43.548 Unrecoverable Media Errors: 0 00:17:43.548 Lifetime Error Log Entries: 0 00:17:43.548 Warning Temperature Time: 0 minutes 00:17:43.548 Critical Temperature Time: 0 minutes 00:17:43.548 
00:17:43.548 Number of Queues 00:17:43.548 ================ 00:17:43.548 Number of I/O Submission Queues: 127 00:17:43.548 Number of I/O Completion Queues: 127 00:17:43.548 00:17:43.548 Active Namespaces 00:17:43.548 ================= 00:17:43.548 Namespace ID:1 00:17:43.548 Error Recovery Timeout: Unlimited 00:17:43.548 Command Set Identifier: NVM (00h) 00:17:43.548 Deallocate: Supported 00:17:43.548 Deallocated/Unwritten Error: Not Supported 00:17:43.548 Deallocated Read Value: Unknown 00:17:43.548 Deallocate in Write Zeroes: Not Supported 00:17:43.548 Deallocated Guard Field: 0xFFFF 00:17:43.548 Flush: Supported 00:17:43.548 Reservation: Supported 00:17:43.548 Namespace Sharing Capabilities: Multiple Controllers 00:17:43.548 Size (in LBAs): 131072 (0GiB) 00:17:43.548 Capacity (in LBAs): 131072 (0GiB) 00:17:43.548 Utilization (in LBAs): 131072 (0GiB) 00:17:43.548 NGUID: BA8589CE740C44C38E811E5457D5D3DA 00:17:43.548 UUID: ba8589ce-740c-44c3-8e81-1e5457d5d3da 00:17:43.548 Thin Provisioning: Not Supported 00:17:43.548 Per-NS Atomic Units: Yes 00:17:43.548 Atomic Boundary Size (Normal): 0 00:17:43.548 Atomic Boundary Size (PFail): 0 00:17:43.548 Atomic Boundary Offset: 0 00:17:43.548 Maximum Single Source Range Length: 65535 00:17:43.548 Maximum Copy Length: 65535 00:17:43.548 Maximum Source Range Count: 1 00:17:43.548 NGUID/EUI64 Never Reused: No 00:17:43.548 Namespace Write Protected: No 00:17:43.548 Number of LBA Formats: 1 00:17:43.548 Current LBA Format: LBA Format #00 00:17:43.548 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:43.548 00:17:43.548 01:55:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:43.548 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.806 [2024-07-24 01:55:58.547126] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:49.112 Initializing NVMe Controllers 00:17:49.112 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:49.112 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:49.112 Initialization complete. Launching workers. 
00:17:49.112 ======================================================== 00:17:49.112 Latency(us) 00:17:49.112 Device Information : IOPS MiB/s Average min max 00:17:49.112 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34009.30 132.85 3762.59 1165.53 8988.64 00:17:49.112 ======================================================== 00:17:49.112 Total : 34009.30 132.85 3762.59 1165.53 8988.64 00:17:49.112 00:17:49.112 [2024-07-24 01:56:03.649662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:49.112 01:56:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:49.112 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.112 [2024-07-24 01:56:03.879309] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:54.386 Initializing NVMe Controllers 00:17:54.386 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:54.386 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:54.386 Initialization complete. Launching workers. 00:17:54.386 ======================================================== 00:17:54.386 Latency(us) 00:17:54.386 Device Information : IOPS MiB/s Average min max 00:17:54.386 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32152.24 125.59 3980.53 1193.29 8267.89 00:17:54.386 ======================================================== 00:17:54.386 Total : 32152.24 125.59 3980.53 1193.29 8267.89 00:17:54.386 00:17:54.386 [2024-07-24 01:56:08.903550] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:54.386 01:56:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:54.386 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.386 [2024-07-24 01:56:09.113097] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:59.682 [2024-07-24 01:56:14.252466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:59.682 Initializing NVMe Controllers 00:17:59.682 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:59.682 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:59.682 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:59.682 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:59.682 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:59.682 Initialization complete. Launching workers. 
00:17:59.682 Starting thread on core 2 00:17:59.682 Starting thread on core 3 00:17:59.682 Starting thread on core 1 00:17:59.682 01:56:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:59.682 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.682 [2024-07-24 01:56:14.558787] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:02.970 [2024-07-24 01:56:17.633773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:02.970 Initializing NVMe Controllers 00:18:02.970 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:02.970 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:02.970 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:18:02.970 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:18:02.970 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:18:02.970 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:18:02.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:02.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:02.970 Initialization complete. Launching workers. 00:18:02.970 Starting thread on core 1 with urgent priority queue 00:18:02.970 Starting thread on core 2 with urgent priority queue 00:18:02.970 Starting thread on core 0 with urgent priority queue 00:18:02.970 Starting thread on core 3 with urgent priority queue 00:18:02.970 SPDK bdev Controller (SPDK2 ) core 0: 4492.33 IO/s 22.26 secs/100000 ios 00:18:02.970 SPDK bdev Controller (SPDK2 ) core 1: 5569.33 IO/s 17.96 secs/100000 ios 00:18:02.970 SPDK bdev Controller (SPDK2 ) core 2: 5359.67 IO/s 18.66 secs/100000 ios 00:18:02.970 SPDK bdev Controller (SPDK2 ) core 3: 5792.67 IO/s 17.26 secs/100000 ios 00:18:02.970 ======================================================== 00:18:02.970 00:18:02.971 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:02.971 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.228 [2024-07-24 01:56:17.937732] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:03.228 Initializing NVMe Controllers 00:18:03.228 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:03.228 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:03.228 Namespace ID: 1 size: 0GB 00:18:03.228 Initialization complete. 00:18:03.228 INFO: using host memory buffer for IO 00:18:03.228 Hello world! 
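The runs above drive the same vfio-user controller (/var/run/vfio-user/domain/vfio-user2/2, subsystem nqn.2019-07.io.spdk:cnode2) with several SPDK example binaries: spdk_nvme_perf for read and write, then reconnect, arbitration, and hello_world. As a reading aid only, the perf invocation pattern recorded in this log condenses to the sketch below; the transport string and paths are copied from the log, and the flag summaries in the comments are an interpretation of the recorded command line, not authoritative documentation.

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # -q 128: queue depth, -o 4096: I/O size in bytes, -t 5: run time in seconds,
  # -w read|write: workload, -c 0x2: core mask; -s 256 and -g configure DPDK memory.
  $PERF -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
  $PERF -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2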
00:18:03.228 [2024-07-24 01:56:17.949807] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:03.228 01:56:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:18:03.228 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.487 [2024-07-24 01:56:18.226630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:04.427 Initializing NVMe Controllers 00:18:04.427 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:04.427 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:04.427 Initialization complete. Launching workers. 00:18:04.427 submit (in ns) avg, min, max = 8118.9, 3580.0, 4014976.7 00:18:04.427 complete (in ns) avg, min, max = 26222.3, 2048.9, 4016256.7 00:18:04.427 00:18:04.427 Submit histogram 00:18:04.427 ================ 00:18:04.427 Range in us Cumulative Count 00:18:04.427 3.579 - 3.603: 0.1685% ( 23) 00:18:04.427 3.603 - 3.627: 1.0254% ( 117) 00:18:04.427 3.627 - 3.650: 2.9444% ( 262) 00:18:04.427 3.650 - 3.674: 6.7531% ( 520) 00:18:04.427 3.674 - 3.698: 13.5794% ( 932) 00:18:04.427 3.698 - 3.721: 21.3653% ( 1063) 00:18:04.427 3.721 - 3.745: 30.8504% ( 1295) 00:18:04.427 3.745 - 3.769: 39.0390% ( 1118) 00:18:04.427 3.769 - 3.793: 47.2863% ( 1126) 00:18:04.427 3.793 - 3.816: 53.4754% ( 845) 00:18:04.427 3.816 - 3.840: 58.7563% ( 721) 00:18:04.427 3.840 - 3.864: 63.4366% ( 639) 00:18:04.427 3.864 - 3.887: 67.3112% ( 529) 00:18:04.427 3.887 - 3.911: 70.7317% ( 467) 00:18:04.427 3.911 - 3.935: 74.4012% ( 501) 00:18:04.427 3.935 - 3.959: 78.2832% ( 530) 00:18:04.427 3.959 - 3.982: 81.5791% ( 450) 00:18:04.427 3.982 - 4.006: 84.7506% ( 433) 00:18:04.427 4.006 - 4.030: 87.0505% ( 314) 00:18:04.427 4.030 - 4.053: 88.8669% ( 248) 00:18:04.427 4.053 - 4.077: 90.6101% ( 238) 00:18:04.427 4.077 - 4.101: 92.0018% ( 190) 00:18:04.427 4.101 - 4.124: 93.2030% ( 164) 00:18:04.427 4.124 - 4.148: 94.0526% ( 116) 00:18:04.427 4.148 - 4.172: 94.7191% ( 91) 00:18:04.427 4.172 - 4.196: 95.1879% ( 64) 00:18:04.427 4.196 - 4.219: 95.6127% ( 58) 00:18:04.427 4.219 - 4.243: 96.0082% ( 54) 00:18:04.427 4.243 - 4.267: 96.2646% ( 35) 00:18:04.427 4.267 - 4.290: 96.4623% ( 27) 00:18:04.427 4.290 - 4.314: 96.6161% ( 21) 00:18:04.427 4.314 - 4.338: 96.7333% ( 16) 00:18:04.427 4.338 - 4.361: 96.7919% ( 8) 00:18:04.427 4.361 - 4.385: 96.8285% ( 5) 00:18:04.427 4.385 - 4.409: 96.9018% ( 10) 00:18:04.427 4.409 - 4.433: 96.9897% ( 12) 00:18:04.427 4.433 - 4.456: 97.0483% ( 8) 00:18:04.427 4.456 - 4.480: 97.0922% ( 6) 00:18:04.427 4.480 - 4.504: 97.1288% ( 5) 00:18:04.427 4.504 - 4.527: 97.1581% ( 4) 00:18:04.427 4.527 - 4.551: 97.1874% ( 4) 00:18:04.427 4.551 - 4.575: 97.2094% ( 3) 00:18:04.427 4.575 - 4.599: 97.2167% ( 1) 00:18:04.428 4.622 - 4.646: 97.2314% ( 2) 00:18:04.428 4.646 - 4.670: 97.2680% ( 5) 00:18:04.428 4.670 - 4.693: 97.2826% ( 2) 00:18:04.428 4.693 - 4.717: 97.2973% ( 2) 00:18:04.428 4.741 - 4.764: 97.3119% ( 2) 00:18:04.428 4.764 - 4.788: 97.3339% ( 3) 00:18:04.428 4.788 - 4.812: 97.3632% ( 4) 00:18:04.428 4.812 - 4.836: 97.3779% ( 2) 00:18:04.428 4.836 - 4.859: 97.3925% ( 2) 00:18:04.428 4.859 - 4.883: 97.3998% ( 1) 00:18:04.428 4.883 - 4.907: 97.4218% ( 3) 00:18:04.428 4.907 - 4.930: 97.4877% ( 9) 00:18:04.428 4.930 - 4.954: 97.5463% ( 8) 
00:18:04.428 4.954 - 4.978: 97.5976% ( 7) 00:18:04.428 4.978 - 5.001: 97.6855% ( 12) 00:18:04.428 5.001 - 5.025: 97.7514% ( 9) 00:18:04.428 5.025 - 5.049: 97.8173% ( 9) 00:18:04.428 5.049 - 5.073: 97.8613% ( 6) 00:18:04.428 5.073 - 5.096: 97.8979% ( 5) 00:18:04.428 5.096 - 5.120: 97.9492% ( 7) 00:18:04.428 5.120 - 5.144: 97.9638% ( 2) 00:18:04.428 5.144 - 5.167: 98.0004% ( 5) 00:18:04.428 5.167 - 5.191: 98.0151% ( 2) 00:18:04.428 5.191 - 5.215: 98.0371% ( 3) 00:18:04.428 5.215 - 5.239: 98.0590% ( 3) 00:18:04.428 5.239 - 5.262: 98.1030% ( 6) 00:18:04.428 5.262 - 5.286: 98.1176% ( 2) 00:18:04.428 5.286 - 5.310: 98.1396% ( 3) 00:18:04.428 5.310 - 5.333: 98.1469% ( 1) 00:18:04.428 5.333 - 5.357: 98.1689% ( 3) 00:18:04.428 5.357 - 5.381: 98.1762% ( 1) 00:18:04.428 5.381 - 5.404: 98.1835% ( 1) 00:18:04.428 5.404 - 5.428: 98.1982% ( 2) 00:18:04.428 5.428 - 5.452: 98.2128% ( 2) 00:18:04.428 5.452 - 5.476: 98.2348% ( 3) 00:18:04.428 5.499 - 5.523: 98.2421% ( 1) 00:18:04.428 5.547 - 5.570: 98.2568% ( 2) 00:18:04.428 5.594 - 5.618: 98.2641% ( 1) 00:18:04.428 5.618 - 5.641: 98.2714% ( 1) 00:18:04.428 5.641 - 5.665: 98.2788% ( 1) 00:18:04.428 5.689 - 5.713: 98.2861% ( 1) 00:18:04.428 5.736 - 5.760: 98.2934% ( 1) 00:18:04.428 5.784 - 5.807: 98.3081% ( 2) 00:18:04.428 5.855 - 5.879: 98.3227% ( 2) 00:18:04.428 5.879 - 5.902: 98.3300% ( 1) 00:18:04.428 5.902 - 5.926: 98.3374% ( 1) 00:18:04.428 6.258 - 6.305: 98.3447% ( 1) 00:18:04.428 6.305 - 6.353: 98.3520% ( 1) 00:18:04.428 6.447 - 6.495: 98.3593% ( 1) 00:18:04.428 6.732 - 6.779: 98.3667% ( 1) 00:18:04.428 7.111 - 7.159: 98.3740% ( 1) 00:18:04.428 7.159 - 7.206: 98.3813% ( 1) 00:18:04.428 7.206 - 7.253: 98.3886% ( 1) 00:18:04.428 7.301 - 7.348: 98.4033% ( 2) 00:18:04.428 7.348 - 7.396: 98.4253% ( 3) 00:18:04.428 7.490 - 7.538: 98.4326% ( 1) 00:18:04.428 7.585 - 7.633: 98.4399% ( 1) 00:18:04.428 7.633 - 7.680: 98.4472% ( 1) 00:18:04.428 7.727 - 7.775: 98.4619% ( 2) 00:18:04.428 7.775 - 7.822: 98.4692% ( 1) 00:18:04.428 7.822 - 7.870: 98.4838% ( 2) 00:18:04.428 7.870 - 7.917: 98.5131% ( 4) 00:18:04.428 7.917 - 7.964: 98.5205% ( 1) 00:18:04.428 7.964 - 8.012: 98.5424% ( 3) 00:18:04.428 8.059 - 8.107: 98.5571% ( 2) 00:18:04.428 8.107 - 8.154: 98.5717% ( 2) 00:18:04.428 8.154 - 8.201: 98.5937% ( 3) 00:18:04.428 8.201 - 8.249: 98.6010% ( 1) 00:18:04.428 8.249 - 8.296: 98.6157% ( 2) 00:18:04.428 8.344 - 8.391: 98.6230% ( 1) 00:18:04.428 8.439 - 8.486: 98.6377% ( 2) 00:18:04.428 8.533 - 8.581: 98.6523% ( 2) 00:18:04.428 8.628 - 8.676: 98.6596% ( 1) 00:18:04.428 8.676 - 8.723: 98.6743% ( 2) 00:18:04.428 8.865 - 8.913: 98.6816% ( 1) 00:18:04.428 8.913 - 8.960: 98.6889% ( 1) 00:18:04.428 9.055 - 9.102: 98.7036% ( 2) 00:18:04.428 9.102 - 9.150: 98.7109% ( 1) 00:18:04.428 9.150 - 9.197: 98.7256% ( 2) 00:18:04.428 9.197 - 9.244: 98.7329% ( 1) 00:18:04.428 9.339 - 9.387: 98.7402% ( 1) 00:18:04.428 9.387 - 9.434: 98.7549% ( 2) 00:18:04.428 9.434 - 9.481: 98.7622% ( 1) 00:18:04.428 9.529 - 9.576: 98.7695% ( 1) 00:18:04.428 9.576 - 9.624: 98.7768% ( 1) 00:18:04.428 9.956 - 10.003: 98.7842% ( 1) 00:18:04.428 10.240 - 10.287: 98.7915% ( 1) 00:18:04.428 10.382 - 10.430: 98.7988% ( 1) 00:18:04.428 10.430 - 10.477: 98.8061% ( 1) 00:18:04.428 10.524 - 10.572: 98.8134% ( 1) 00:18:04.428 10.714 - 10.761: 98.8208% ( 1) 00:18:04.428 10.904 - 10.951: 98.8281% ( 1) 00:18:04.428 11.093 - 11.141: 98.8354% ( 1) 00:18:04.428 11.188 - 11.236: 98.8501% ( 2) 00:18:04.428 11.236 - 11.283: 98.8647% ( 2) 00:18:04.428 11.330 - 11.378: 98.8867% ( 3) 00:18:04.428 11.710 - 11.757: 98.8940% ( 1) 
00:18:04.428 12.136 - 12.231: 98.9087% ( 2) 00:18:04.428 12.326 - 12.421: 98.9160% ( 1) 00:18:04.428 12.516 - 12.610: 98.9233% ( 1) 00:18:04.428 12.800 - 12.895: 98.9306% ( 1) 00:18:04.428 12.990 - 13.084: 98.9453% ( 2) 00:18:04.428 13.179 - 13.274: 98.9526% ( 1) 00:18:04.428 13.274 - 13.369: 98.9746% ( 3) 00:18:04.428 13.464 - 13.559: 98.9892% ( 2) 00:18:04.428 13.938 - 14.033: 98.9966% ( 1) 00:18:04.428 14.601 - 14.696: 99.0112% ( 2) 00:18:04.428 15.076 - 15.170: 99.0185% ( 1) 00:18:04.428 17.256 - 17.351: 99.0405% ( 3) 00:18:04.428 17.351 - 17.446: 99.0478% ( 1) 00:18:04.428 17.446 - 17.541: 99.0845% ( 5) 00:18:04.428 17.541 - 17.636: 99.1137% ( 4) 00:18:04.428 17.636 - 17.730: 99.1577% ( 6) 00:18:04.428 17.730 - 17.825: 99.1943% ( 5) 00:18:04.428 17.825 - 17.920: 99.2236% ( 4) 00:18:04.428 17.920 - 18.015: 99.2895% ( 9) 00:18:04.428 18.015 - 18.110: 99.3188% ( 4) 00:18:04.428 18.110 - 18.204: 99.3848% ( 9) 00:18:04.428 18.204 - 18.299: 99.4653% ( 11) 00:18:04.428 18.299 - 18.394: 99.5093% ( 6) 00:18:04.428 18.394 - 18.489: 99.5679% ( 8) 00:18:04.428 18.489 - 18.584: 99.6118% ( 6) 00:18:04.428 18.584 - 18.679: 99.6558% ( 6) 00:18:04.428 18.679 - 18.773: 99.6924% ( 5) 00:18:04.428 18.773 - 18.868: 99.7510% ( 8) 00:18:04.428 18.868 - 18.963: 99.7583% ( 1) 00:18:04.428 18.963 - 19.058: 99.7876% ( 4) 00:18:04.428 19.058 - 19.153: 99.8096% ( 3) 00:18:04.428 19.153 - 19.247: 99.8242% ( 2) 00:18:04.428 19.247 - 19.342: 99.8462% ( 3) 00:18:04.428 19.721 - 19.816: 99.8535% ( 1) 00:18:04.428 20.101 - 20.196: 99.8608% ( 1) 00:18:04.428 20.196 - 20.290: 99.8755% ( 2) 00:18:04.428 21.997 - 22.092: 99.8828% ( 1) 00:18:04.428 27.876 - 28.065: 99.8901% ( 1) 00:18:04.428 28.824 - 29.013: 99.8975% ( 1) 00:18:04.428 3980.705 - 4004.978: 99.9854% ( 12) 00:18:04.428 4004.978 - 4029.250: 100.0000% ( 2) 00:18:04.428 00:18:04.428 Complete histogram 00:18:04.428 ================== 00:18:04.428 Range in us Cumulative Count 00:18:04.428 2.039 - 2.050: 0.0220% ( 3) 00:18:04.428 2.050 - 2.062: 1.1792% ( 158) 00:18:04.428 2.062 - 2.074: 6.2917% ( 698) 00:18:04.428 2.074 - 2.086: 16.8974% ( 1448) 00:18:04.428 2.086 - 2.098: 28.0158% ( 1518) 00:18:04.428 2.098 - 2.110: 37.0907% ( 1239) 00:18:04.428 2.110 - 2.121: 49.5495% ( 1701) 00:18:04.428 2.121 - 2.133: 58.4194% ( 1211) 00:18:04.428 2.133 - 2.145: 63.6783% ( 718) 00:18:04.428 2.145 - 2.157: 69.3034% ( 768) 00:18:04.428 2.157 - 2.169: 74.7162% ( 739) 00:18:04.428 2.169 - 2.181: 78.8618% ( 566) 00:18:04.428 2.181 - 2.193: 83.9522% ( 695) 00:18:04.428 2.193 - 2.204: 87.3874% ( 469) 00:18:04.428 2.204 - 2.216: 88.8230% ( 196) 00:18:04.428 2.216 - 2.228: 89.8704% ( 143) 00:18:04.428 2.228 - 2.240: 90.8299% ( 131) 00:18:04.428 2.240 - 2.252: 91.8186% ( 135) 00:18:04.428 2.252 - 2.264: 93.2469% ( 195) 00:18:04.428 2.264 - 2.276: 94.1918% ( 129) 00:18:04.428 2.276 - 2.287: 94.8949% ( 96) 00:18:04.428 2.287 - 2.299: 95.1293% ( 32) 00:18:04.428 2.299 - 2.311: 95.2758% ( 20) 00:18:04.428 2.311 - 2.323: 95.4149% ( 19) 00:18:04.428 2.323 - 2.335: 95.6420% ( 31) 00:18:04.428 2.335 - 2.347: 95.8544% ( 29) 00:18:04.428 2.347 - 2.359: 95.9423% ( 12) 00:18:04.428 2.359 - 2.370: 96.0521% ( 15) 00:18:04.428 2.370 - 2.382: 96.1327% ( 11) 00:18:04.428 2.382 - 2.394: 96.2646% ( 18) 00:18:04.428 2.394 - 2.406: 96.5429% ( 38) 00:18:04.428 2.406 - 2.418: 96.7992% ( 35) 00:18:04.428 2.418 - 2.430: 97.0556% ( 35) 00:18:04.428 2.430 - 2.441: 97.3046% ( 34) 00:18:04.428 2.441 - 2.453: 97.5170% ( 29) 00:18:04.428 2.453 - 2.465: 97.7368% ( 30) 00:18:04.428 2.465 - 2.477: 97.8613% ( 17) 
00:18:04.428 2.477 - 2.489: 97.9638% ( 14) 00:18:04.428 2.489 - 2.501: 98.0810% ( 16) 00:18:04.428 2.501 - 2.513: 98.1616% ( 11) 00:18:04.428 2.513 - 2.524: 98.1909% ( 4) 00:18:04.428 2.524 - 2.536: 98.2055% ( 2) 00:18:04.429 2.536 - 2.548: 98.2348% ( 4) 00:18:04.429 2.548 - 2.560: 98.2641% ( 4) 00:18:04.429 2.560 - 2.572: 98.2934% ( 4) 00:18:04.429 2.572 - 2.584: 98.3081% ( 2) 00:18:04.429 2.584 - 2.596: 98.3154% ( 1) 00:18:04.429 2.596 - 2.607: 98.3300% ( 2) 00:18:04.429 2.607 - 2.619: 98.3447% ( 2) 00:18:04.429 2.619 - 2.631: 98.3667% ( 3) 00:18:04.429 2.631 - 2.643: 98.3740% ( 1) 00:18:04.429 2.643 - 2.655: 98.3886% ( 2) 00:18:04.429 2.667 - 2.679: 98.4033% ( 2) 00:18:04.429 2.702 - 2.714: 98.4253% ( 3) 00:18:04.429 2.738 - 2.750: 98.4326% ( 1) 00:18:04.429 2.761 - 2.773: 98.4399% ( 1) 00:18:04.429 2.785 - 2.797: 98.4546% ( 2) 00:18:04.429 2.797 - 2.809: 98.4619% ( 1) 00:18:04.429 2.809 - 2.821: 98.4692% ( 1) 00:18:04.429 2.821 - 2.833: 98.4838% ( 2) 00:18:04.429 2.833 - 2.844: 98.4985% ( 2) 00:18:04.429 2.844 - 2.856: 98.5058% ( 1) 00:18:04.429 2.856 - 2.868: 98.5131% ( 1) 00:18:04.429 2.892 - 2.904: 98.5205% ( 1) 00:18:04.429 2.916 - 2.927: 98.5424% ( 3) 00:18:04.429 2.951 - 2.963: 98.5498% ( 1) 00:18:04.429 2.963 - 2.975: 98.5571% ( 1) 00:18:04.429 3.022 - 3.034: 98.5644% ( 1) 00:18:04.429 3.081 - 3.105: 98.5717% ( 1) 00:18:04.429 3.176 - 3.200: 98.5791% ( 1) 00:18:04.429 3.247 - 3.271: 9[2024-07-24 01:56:19.321200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:04.688 8.5864% ( 1) 00:18:04.688 3.319 - 3.342: 98.5937% ( 1) 00:18:04.688 3.342 - 3.366: 98.6084% ( 2) 00:18:04.688 3.390 - 3.413: 98.6157% ( 1) 00:18:04.688 3.413 - 3.437: 98.6230% ( 1) 00:18:04.688 3.437 - 3.461: 98.6303% ( 1) 00:18:04.688 3.484 - 3.508: 98.6377% ( 1) 00:18:04.688 3.508 - 3.532: 98.6450% ( 1) 00:18:04.688 3.532 - 3.556: 98.6596% ( 2) 00:18:04.688 3.579 - 3.603: 98.6670% ( 1) 00:18:04.688 3.627 - 3.650: 98.6743% ( 1) 00:18:04.688 3.721 - 3.745: 98.6889% ( 2) 00:18:04.688 3.769 - 3.793: 98.6963% ( 1) 00:18:04.688 3.816 - 3.840: 98.7036% ( 1) 00:18:04.688 3.887 - 3.911: 98.7109% ( 1) 00:18:04.688 3.935 - 3.959: 98.7182% ( 1) 00:18:04.688 3.959 - 3.982: 98.7329% ( 2) 00:18:04.688 4.053 - 4.077: 98.7549% ( 3) 00:18:04.688 4.124 - 4.148: 98.7622% ( 1) 00:18:04.688 5.215 - 5.239: 98.7695% ( 1) 00:18:04.688 5.784 - 5.807: 98.7768% ( 1) 00:18:04.688 5.807 - 5.831: 98.7842% ( 1) 00:18:04.688 5.902 - 5.926: 98.7915% ( 1) 00:18:04.688 6.021 - 6.044: 98.7988% ( 1) 00:18:04.688 6.163 - 6.210: 98.8061% ( 1) 00:18:04.688 6.353 - 6.400: 98.8134% ( 1) 00:18:04.688 6.447 - 6.495: 98.8208% ( 1) 00:18:04.688 6.495 - 6.542: 98.8281% ( 1) 00:18:04.688 6.732 - 6.779: 98.8354% ( 1) 00:18:04.688 6.779 - 6.827: 98.8427% ( 1) 00:18:04.688 6.921 - 6.969: 98.8501% ( 1) 00:18:04.688 7.064 - 7.111: 98.8574% ( 1) 00:18:04.688 7.396 - 7.443: 98.8647% ( 1) 00:18:04.688 7.727 - 7.775: 98.8720% ( 1) 00:18:04.688 7.822 - 7.870: 98.8794% ( 1) 00:18:04.688 8.059 - 8.107: 98.8867% ( 1) 00:18:04.688 8.154 - 8.201: 98.8940% ( 1) 00:18:04.688 8.391 - 8.439: 98.9013% ( 1) 00:18:04.688 8.818 - 8.865: 98.9087% ( 1) 00:18:04.688 9.576 - 9.624: 98.9160% ( 1) 00:18:04.688 10.382 - 10.430: 98.9233% ( 1) 00:18:04.688 15.644 - 15.739: 98.9306% ( 1) 00:18:04.688 15.739 - 15.834: 98.9453% ( 2) 00:18:04.688 15.834 - 15.929: 98.9526% ( 1) 00:18:04.688 15.929 - 16.024: 98.9892% ( 5) 00:18:04.688 16.024 - 16.119: 99.0039% ( 2) 00:18:04.688 16.119 - 16.213: 99.0405% ( 5) 00:18:04.688 16.213 - 16.308: 
99.0625% ( 3) 00:18:04.688 16.308 - 16.403: 99.0698% ( 1) 00:18:04.688 16.403 - 16.498: 99.1211% ( 7) 00:18:04.688 16.498 - 16.593: 99.1797% ( 8) 00:18:04.688 16.593 - 16.687: 99.2383% ( 8) 00:18:04.688 16.687 - 16.782: 99.2749% ( 5) 00:18:04.688 16.782 - 16.877: 99.2895% ( 2) 00:18:04.688 16.877 - 16.972: 99.2969% ( 1) 00:18:04.688 16.972 - 17.067: 99.3262% ( 4) 00:18:04.688 17.067 - 17.161: 99.3335% ( 1) 00:18:04.688 17.161 - 17.256: 99.3408% ( 1) 00:18:04.688 17.446 - 17.541: 99.3555% ( 2) 00:18:04.688 17.825 - 17.920: 99.3628% ( 1) 00:18:04.688 18.110 - 18.204: 99.3701% ( 1) 00:18:04.688 18.394 - 18.489: 99.3774% ( 1) 00:18:04.688 18.679 - 18.773: 99.3848% ( 1) 00:18:04.688 21.049 - 21.144: 99.3921% ( 1) 00:18:04.688 39.443 - 39.633: 99.3994% ( 1) 00:18:04.688 3616.616 - 3640.889: 99.4067% ( 1) 00:18:04.688 3980.705 - 4004.978: 99.8755% ( 64) 00:18:04.688 4004.978 - 4029.250: 100.0000% ( 17) 00:18:04.688 00:18:04.688 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:18:04.688 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:04.688 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:18:04.688 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:18:04.688 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:04.946 [ 00:18:04.946 { 00:18:04.946 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:04.946 "subtype": "Discovery", 00:18:04.946 "listen_addresses": [], 00:18:04.946 "allow_any_host": true, 00:18:04.946 "hosts": [] 00:18:04.946 }, 00:18:04.946 { 00:18:04.946 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:04.946 "subtype": "NVMe", 00:18:04.946 "listen_addresses": [ 00:18:04.946 { 00:18:04.946 "trtype": "VFIOUSER", 00:18:04.946 "adrfam": "IPv4", 00:18:04.946 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:04.946 "trsvcid": "0" 00:18:04.946 } 00:18:04.946 ], 00:18:04.946 "allow_any_host": true, 00:18:04.946 "hosts": [], 00:18:04.946 "serial_number": "SPDK1", 00:18:04.946 "model_number": "SPDK bdev Controller", 00:18:04.946 "max_namespaces": 32, 00:18:04.946 "min_cntlid": 1, 00:18:04.946 "max_cntlid": 65519, 00:18:04.946 "namespaces": [ 00:18:04.946 { 00:18:04.946 "nsid": 1, 00:18:04.946 "bdev_name": "Malloc1", 00:18:04.946 "name": "Malloc1", 00:18:04.946 "nguid": "DD8BACE42E9D491F8F85B052D94096A7", 00:18:04.946 "uuid": "dd8bace4-2e9d-491f-8f85-b052d94096a7" 00:18:04.946 }, 00:18:04.946 { 00:18:04.946 "nsid": 2, 00:18:04.946 "bdev_name": "Malloc3", 00:18:04.946 "name": "Malloc3", 00:18:04.946 "nguid": "3EAB879F3D6B4737BB690DD55EF3ED09", 00:18:04.946 "uuid": "3eab879f-3d6b-4737-bb69-0dd55ef3ed09" 00:18:04.946 } 00:18:04.946 ] 00:18:04.946 }, 00:18:04.946 { 00:18:04.946 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:04.946 "subtype": "NVMe", 00:18:04.946 "listen_addresses": [ 00:18:04.946 { 00:18:04.946 "trtype": "VFIOUSER", 00:18:04.946 "adrfam": "IPv4", 00:18:04.946 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:04.946 "trsvcid": "0" 00:18:04.946 } 00:18:04.946 ], 00:18:04.946 "allow_any_host": true, 00:18:04.946 "hosts": [], 00:18:04.946 "serial_number": "SPDK2", 00:18:04.946 "model_number": 
"SPDK bdev Controller", 00:18:04.946 "max_namespaces": 32, 00:18:04.947 "min_cntlid": 1, 00:18:04.947 "max_cntlid": 65519, 00:18:04.947 "namespaces": [ 00:18:04.947 { 00:18:04.947 "nsid": 1, 00:18:04.947 "bdev_name": "Malloc2", 00:18:04.947 "name": "Malloc2", 00:18:04.947 "nguid": "BA8589CE740C44C38E811E5457D5D3DA", 00:18:04.947 "uuid": "ba8589ce-740c-44c3-8e81-1e5457d5d3da" 00:18:04.947 } 00:18:04.947 ] 00:18:04.947 } 00:18:04.947 ] 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1427102 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:04.947 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:18:04.947 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.947 [2024-07-24 01:56:19.776779] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:05.205 Malloc4 00:18:05.205 01:56:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:18:05.462 [2024-07-24 01:56:20.152587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:05.462 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:05.462 Asynchronous Event Request test 00:18:05.462 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:18:05.462 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:18:05.462 Registering asynchronous event callbacks... 00:18:05.462 Starting namespace attribute notice tests for all controllers... 00:18:05.462 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:05.462 aer_cb - Changed Namespace 00:18:05.462 Cleaning up... 
00:18:05.722 [ 00:18:05.722 { 00:18:05.722 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:05.722 "subtype": "Discovery", 00:18:05.722 "listen_addresses": [], 00:18:05.722 "allow_any_host": true, 00:18:05.722 "hosts": [] 00:18:05.722 }, 00:18:05.722 { 00:18:05.722 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:05.722 "subtype": "NVMe", 00:18:05.722 "listen_addresses": [ 00:18:05.722 { 00:18:05.722 "trtype": "VFIOUSER", 00:18:05.722 "adrfam": "IPv4", 00:18:05.722 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:05.722 "trsvcid": "0" 00:18:05.722 } 00:18:05.722 ], 00:18:05.722 "allow_any_host": true, 00:18:05.722 "hosts": [], 00:18:05.722 "serial_number": "SPDK1", 00:18:05.722 "model_number": "SPDK bdev Controller", 00:18:05.722 "max_namespaces": 32, 00:18:05.722 "min_cntlid": 1, 00:18:05.722 "max_cntlid": 65519, 00:18:05.722 "namespaces": [ 00:18:05.722 { 00:18:05.722 "nsid": 1, 00:18:05.722 "bdev_name": "Malloc1", 00:18:05.722 "name": "Malloc1", 00:18:05.722 "nguid": "DD8BACE42E9D491F8F85B052D94096A7", 00:18:05.722 "uuid": "dd8bace4-2e9d-491f-8f85-b052d94096a7" 00:18:05.722 }, 00:18:05.722 { 00:18:05.722 "nsid": 2, 00:18:05.722 "bdev_name": "Malloc3", 00:18:05.722 "name": "Malloc3", 00:18:05.722 "nguid": "3EAB879F3D6B4737BB690DD55EF3ED09", 00:18:05.722 "uuid": "3eab879f-3d6b-4737-bb69-0dd55ef3ed09" 00:18:05.722 } 00:18:05.722 ] 00:18:05.722 }, 00:18:05.722 { 00:18:05.722 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:05.722 "subtype": "NVMe", 00:18:05.722 "listen_addresses": [ 00:18:05.722 { 00:18:05.722 "trtype": "VFIOUSER", 00:18:05.722 "adrfam": "IPv4", 00:18:05.722 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:05.722 "trsvcid": "0" 00:18:05.722 } 00:18:05.722 ], 00:18:05.722 "allow_any_host": true, 00:18:05.722 "hosts": [], 00:18:05.722 "serial_number": "SPDK2", 00:18:05.722 "model_number": "SPDK bdev Controller", 00:18:05.722 "max_namespaces": 32, 00:18:05.722 "min_cntlid": 1, 00:18:05.722 "max_cntlid": 65519, 00:18:05.722 "namespaces": [ 00:18:05.722 { 00:18:05.722 "nsid": 1, 00:18:05.722 "bdev_name": "Malloc2", 00:18:05.722 "name": "Malloc2", 00:18:05.722 "nguid": "BA8589CE740C44C38E811E5457D5D3DA", 00:18:05.722 "uuid": "ba8589ce-740c-44c3-8e81-1e5457d5d3da" 00:18:05.722 }, 00:18:05.722 { 00:18:05.722 "nsid": 2, 00:18:05.722 "bdev_name": "Malloc4", 00:18:05.722 "name": "Malloc4", 00:18:05.722 "nguid": "C6CA39D85C734F26BAA729C88B5D05A7", 00:18:05.722 "uuid": "c6ca39d8-5c73-4f26-baa7-29c88b5d05a7" 00:18:05.722 } 00:18:05.722 ] 00:18:05.722 } 00:18:05.722 ] 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1427102 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1421524 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1421524 ']' 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1421524 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1421524 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1421524' 00:18:05.722 killing process with pid 1421524 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1421524 00:18:05.722 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1421524 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1427240 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1427240' 00:18:05.981 Process pid: 1427240 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1427240 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1427240 ']' 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.981 01:56:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:05.981 [2024-07-24 01:56:20.840307] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:18:05.981 [2024-07-24 01:56:20.841340] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:18:05.981 [2024-07-24 01:56:20.841408] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.981 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.239 [2024-07-24 01:56:20.904080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.239 [2024-07-24 01:56:20.990949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.239 [2024-07-24 01:56:20.991000] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.239 [2024-07-24 01:56:20.991028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.239 [2024-07-24 01:56:20.991039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.239 [2024-07-24 01:56:20.991049] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.239 [2024-07-24 01:56:20.991141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.239 [2024-07-24 01:56:20.994336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.239 [2024-07-24 01:56:20.994403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.239 [2024-07-24 01:56:20.994407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.239 [2024-07-24 01:56:21.096460] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:18:06.239 [2024-07-24 01:56:21.096683] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:18:06.239 [2024-07-24 01:56:21.096971] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:18:06.239 [2024-07-24 01:56:21.097526] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:18:06.239 [2024-07-24 01:56:21.097806] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
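At this point the first target has been killed and a new nvmf_tgt has been started in interrupt mode (-m '[0,1,2,3]' --interrupt-mode), with each poll group switched to intr mode as logged above. The test script then rebuilds the two vfio-user devices, as recorded in the lines that follow; stripped of the xtrace prefixes, that setup is roughly the sketch below (flags and paths copied from the log):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # VFIOUSER transport with the interrupt-mode options used by this test (-M -I).
  $RPC nvmf_create_transport -t VFIOUSER -M -I
  for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $RPC bdev_malloc_create 64 512 -b Malloc$i
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done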
00:18:06.239 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.239 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:18:06.239 01:56:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:07.618 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:18:07.618 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:07.618 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:07.618 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:07.618 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:07.618 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:07.876 Malloc1 00:18:07.876 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:08.134 01:56:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:08.392 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:08.651 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:08.651 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:08.651 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:08.909 Malloc2 00:18:08.909 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:09.167 01:56:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:09.425 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1427240 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@948 -- # '[' -z 1427240 ']' 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1427240 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1427240 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:09.682 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1427240' 00:18:09.682 killing process with pid 1427240 00:18:09.683 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1427240 00:18:09.683 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1427240 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:18:09.941 00:18:09.941 real 0m52.465s 00:18:09.941 user 3m27.329s 00:18:09.941 sys 0m4.351s 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:09.941 ************************************ 00:18:09.941 END TEST nvmf_vfio_user 00:18:09.941 ************************************ 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:09.941 ************************************ 00:18:09.941 START TEST nvmf_vfio_user_nvme_compliance 00:18:09.941 ************************************ 00:18:09.941 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:18:10.200 * Looking for test storage... 
00:18:10.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:18:10.200 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1427839 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1427839' 00:18:10.201 Process pid: 1427839 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1427839 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1427839 ']' 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.201 01:56:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:10.201 [2024-07-24 01:56:24.923510] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:18:10.201 [2024-07-24 01:56:24.923593] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.201 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.201 [2024-07-24 01:56:24.980497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:10.201 [2024-07-24 01:56:25.064219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.201 [2024-07-24 01:56:25.064276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.201 [2024-07-24 01:56:25.064304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.201 [2024-07-24 01:56:25.064322] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.201 [2024-07-24 01:56:25.064334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.201 [2024-07-24 01:56:25.064419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.201 [2024-07-24 01:56:25.064480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.201 [2024-07-24 01:56:25.064483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.460 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.460 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:18:10.460 01:56:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:11.393 malloc0 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:11.393 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.394 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:11.394 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.394 01:56:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:18:11.652 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.652 00:18:11.652 00:18:11.652 CUnit - A unit testing framework for C - Version 2.1-3 00:18:11.652 http://cunit.sourceforge.net/ 00:18:11.652 00:18:11.652 00:18:11.652 Suite: nvme_compliance 00:18:11.652 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 01:56:26.399436] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.652 [2024-07-24 01:56:26.400887] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:18:11.652 [2024-07-24 01:56:26.400911] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:18:11.652 [2024-07-24 01:56:26.400938] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:18:11.652 [2024-07-24 01:56:26.403466] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.652 passed 00:18:11.652 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 01:56:26.490079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.652 [2024-07-24 01:56:26.493089] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.652 passed 00:18:11.910 Test: admin_identify_ns ...[2024-07-24 01:56:26.575795] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.910 [2024-07-24 01:56:26.635338] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:18:11.910 [2024-07-24 01:56:26.643335] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:18:11.910 [2024-07-24 
01:56:26.664463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.910 passed 00:18:11.910 Test: admin_get_features_mandatory_features ...[2024-07-24 01:56:26.748918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:11.910 [2024-07-24 01:56:26.753946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:11.910 passed 00:18:12.168 Test: admin_get_features_optional_features ...[2024-07-24 01:56:26.836497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.168 [2024-07-24 01:56:26.839523] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.168 passed 00:18:12.168 Test: admin_set_features_number_of_queues ...[2024-07-24 01:56:26.921623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.168 [2024-07-24 01:56:27.030430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.168 passed 00:18:12.426 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 01:56:27.112429] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.426 [2024-07-24 01:56:27.115456] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.426 passed 00:18:12.426 Test: admin_get_log_page_with_lpo ...[2024-07-24 01:56:27.199817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.426 [2024-07-24 01:56:27.267344] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:18:12.426 [2024-07-24 01:56:27.280426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.426 passed 00:18:12.711 Test: fabric_property_get ...[2024-07-24 01:56:27.363859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.711 [2024-07-24 01:56:27.365136] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:18:12.711 [2024-07-24 01:56:27.366878] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.711 passed 00:18:12.711 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 01:56:27.449404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.711 [2024-07-24 01:56:27.450718] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:18:12.711 [2024-07-24 01:56:27.452428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.711 passed 00:18:12.711 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 01:56:27.535556] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.969 [2024-07-24 01:56:27.627341] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:12.969 [2024-07-24 01:56:27.643330] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:12.969 [2024-07-24 01:56:27.651455] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.969 passed 00:18:12.969 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 01:56:27.730932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:12.969 [2024-07-24 01:56:27.732239] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
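Before the CUnit suite runs, the script wires the target up entirely through JSON-RPC: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 with one namespace, and a listener rooted at /var/run/vfio-user, after which the nvme_compliance binary connects to that socket directory. The same sequence expressed with scripts/rpc.py, as a sketch that assumes the target listens on the default /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    NQN=nqn.2021-09.io.spdk:cnode0
    TRADDR=/var/run/vfio-user

    mkdir -p "$TRADDR"
    $RPC nvmf_create_transport -t VFIOUSER
    $RPC bdev_malloc_create 64 512 -b malloc0                  # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem "$NQN" -a -s spdk -m 32         # any host, serial "spdk", 32 namespaces max
    $RPC nvmf_subsystem_add_ns "$NQN" malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t VFIOUSER -a "$TRADDR" -s 0
    "$SPDK/test/nvme/compliance/nvme_compliance" -g \
        -r "trtype:VFIOUSER traddr:$TRADDR subnqn:$NQN"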
00:18:12.969 [2024-07-24 01:56:27.733949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:12.969 passed 00:18:12.969 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 01:56:27.816119] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.228 [2024-07-24 01:56:27.892329] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:13.228 [2024-07-24 01:56:27.916326] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:18:13.228 [2024-07-24 01:56:27.921444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.228 passed 00:18:13.228 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 01:56:28.004981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.228 [2024-07-24 01:56:28.006266] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:18:13.228 [2024-07-24 01:56:28.006332] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:18:13.228 [2024-07-24 01:56:28.008006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.228 passed 00:18:13.228 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 01:56:28.091107] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.488 [2024-07-24 01:56:28.182327] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:18:13.488 [2024-07-24 01:56:28.190341] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:18:13.488 [2024-07-24 01:56:28.198326] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:18:13.488 [2024-07-24 01:56:28.206329] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:18:13.488 [2024-07-24 01:56:28.235441] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.488 passed 00:18:13.488 Test: admin_create_io_sq_verify_pc ...[2024-07-24 01:56:28.320342] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:13.488 [2024-07-24 01:56:28.336353] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:18:13.488 [2024-07-24 01:56:28.353684] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:13.747 passed 00:18:13.747 Test: admin_create_io_qp_max_qps ...[2024-07-24 01:56:28.436215] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:14.684 [2024-07-24 01:56:29.535332] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:18:15.252 [2024-07-24 01:56:29.923235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.252 passed 00:18:15.252 Test: admin_create_io_sq_shared_cq ...[2024-07-24 01:56:30.006473] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:18:15.252 [2024-07-24 01:56:30.138342] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:18:15.512 [2024-07-24 01:56:30.175443] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:18:15.512 passed 00:18:15.512 00:18:15.512 Run Summary: Type Total Ran Passed Failed Inactive 00:18:15.512 
suites 1 1 n/a 0 0 00:18:15.512 tests 18 18 18 0 0 00:18:15.512 asserts 360 360 360 0 n/a 00:18:15.512 00:18:15.512 Elapsed time = 1.565 seconds 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1427839 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1427839 ']' 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1427839 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1427839 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1427839' 00:18:15.512 killing process with pid 1427839 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1427839 00:18:15.512 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1427839 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:18:15.772 00:18:15.772 real 0m5.711s 00:18:15.772 user 0m16.047s 00:18:15.772 sys 0m0.555s 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:18:15.772 ************************************ 00:18:15.772 END TEST nvmf_vfio_user_nvme_compliance 00:18:15.772 ************************************ 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:15.772 ************************************ 00:18:15.772 START TEST nvmf_vfio_user_fuzz 00:18:15.772 ************************************ 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:18:15.772 * Looking for test storage... 
00:18:15.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1428561 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1428561' 00:18:15.772 Process pid: 1428561 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1428561 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1428561 ']' 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.772 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.773 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.773 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.338 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.338 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:18:16.338 01:56:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.276 malloc0 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.276 01:56:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:17.276 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.277 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
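The transport ID assembled here is what the fuzzer consumes in the next record: nvme_fuzz attaches to the vfio-user endpoint and drives randomized admin and I/O commands against it for a fixed wall-clock time with a fixed seed, which is what makes a run repeatable. A sketch of the equivalent manual invocation, with flag values copied from the run below; beyond -t (run time in seconds) and -S (random seed) the switch semantics are not restated here:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
    # Fuzz the vfio-user target for 30 seconds on core 1 (-m 0x2) with seed 123456.
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a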
00:18:17.277 01:56:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:49.354 Fuzzing completed. Shutting down the fuzz application 00:18:49.354 00:18:49.354 Dumping successful admin opcodes: 00:18:49.354 8, 9, 10, 24, 00:18:49.354 Dumping successful io opcodes: 00:18:49.354 0, 00:18:49.354 NS: 0x200003a1ef00 I/O qp, Total commands completed: 603439, total successful commands: 2330, random_seed: 2513635328 00:18:49.354 NS: 0x200003a1ef00 admin qp, Total commands completed: 123890, total successful commands: 1019, random_seed: 1804476928 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1428561 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1428561 ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1428561 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1428561 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1428561' 00:18:49.354 killing process with pid 1428561 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1428561 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1428561 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:49.354 00:18:49.354 real 0m32.215s 00:18:49.354 user 0m31.200s 00:18:49.354 sys 0m30.009s 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:49.354 
************************************ 00:18:49.354 END TEST nvmf_vfio_user_fuzz 00:18:49.354 ************************************ 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:49.354 ************************************ 00:18:49.354 START TEST nvmf_auth_target 00:18:49.354 ************************************ 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:49.354 * Looking for test storage... 00:18:49.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.354 01:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.354 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.355 01:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.921 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.922 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.922 01:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:50.182 01:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.182 01:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:50.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:18:50.182 00:18:50.182 --- 10.0.0.2 ping statistics --- 00:18:50.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.182 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:50.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:18:50.182 00:18:50.182 --- 10.0.0.1 ping statistics --- 00:18:50.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.182 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1434104 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1434104 00:18:50.182 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1434104 ']' 00:18:50.183 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.183 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.183 01:57:04 
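With both ends addressed, the harness opens the NVMe/TCP port in the firewall, confirms reachability in both directions with a single ping, loads the kernel initiator transport, and starts nvmf_tgt inside the target namespace so it listens on 10.0.0.2. A condensed sketch (the nvmf_tgt path is written workspace-relative here; the log uses the absolute Jenkins workspace path):

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP into the root ns
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
modprobe nvme-tcp                                              # kernel host-side transport
ip netns exec cvl_0_0_ns_spdk \
  ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &      # target with auth debug logging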
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.183 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.183 01:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1434124 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=46ac9f7d5346ec3941af787d6f339e8780febab317c9d1fa 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4EB 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 46ac9f7d5346ec3941af787d6f339e8780febab317c9d1fa 0 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 46ac9f7d5346ec3941af787d6f339e8780febab317c9d1fa 0 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=46ac9f7d5346ec3941af787d6f339e8780febab317c9d1fa 00:18:50.441 01:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:50.441 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.700 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4EB 00:18:50.700 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4EB 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.4EB 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8c6cff7b3f04de660a9e56aa12d27e8e03ec9b28e2d68f5bd88b4d002135dd8a 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Qc 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8c6cff7b3f04de660a9e56aa12d27e8e03ec9b28e2d68f5bd88b4d002135dd8a 3 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8c6cff7b3f04de660a9e56aa12d27e8e03ec9b28e2d68f5bd88b4d002135dd8a 3 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8c6cff7b3f04de660a9e56aa12d27e8e03ec9b28e2d68f5bd88b4d002135dd8a 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Qc 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Qc 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.2Qc 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.701 01:57:05 
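Each gen_dhchap_key call above draws key material from /dev/urandom as a hex string and wraps it in the DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64>:, where <hash> is 00/01/02/03 for null/SHA-256/SHA-384/SHA-512 and the base64 payload is the ASCII key followed by its little-endian CRC-32. A standalone sketch of that formatting step, on the assumption that the python helper piped to in the trace does exactly this:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex characters of key material
digest=0                               # 0 = no hash pinned in the secret ("null")
python - <<EOF
import base64, zlib
key = b"$key"                                        # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")          # 4-byte little-endian CRC-32 appended
print("DHHC-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF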
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e82b7db692aa40eec234e025eb597992 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GJi 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e82b7db692aa40eec234e025eb597992 1 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e82b7db692aa40eec234e025eb597992 1 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e82b7db692aa40eec234e025eb597992 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GJi 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GJi 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.GJi 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d2fcc9355bf14c6e4aee594081138c6bdcce6ae2d6a5b87f 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XJG 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d2fcc9355bf14c6e4aee594081138c6bdcce6ae2d6a5b87f 2 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
d2fcc9355bf14c6e4aee594081138c6bdcce6ae2d6a5b87f 2 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d2fcc9355bf14c6e4aee594081138c6bdcce6ae2d6a5b87f 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XJG 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XJG 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.XJG 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:50.701 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=02ae8568d3178ac6786f05a425c1cccc49293fb5a8ae1207 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZOe 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 02ae8568d3178ac6786f05a425c1cccc49293fb5a8ae1207 2 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 02ae8568d3178ac6786f05a425c1cccc49293fb5a8ae1207 2 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=02ae8568d3178ac6786f05a425c1cccc49293fb5a8ae1207 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZOe 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZOe 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ZOe 00:18:50.702 01:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b6c8196645ff0d5903f75041bc73ca43 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.NQC 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b6c8196645ff0d5903f75041bc73ca43 1 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b6c8196645ff0d5903f75041bc73ca43 1 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b6c8196645ff0d5903f75041bc73ca43 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:50.702 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.NQC 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.NQC 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.NQC 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=144483ae34c2c0eec6391d7dfa389e9b57aeb07936d7cbaee81637ad4606a872 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:50.960 
01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Coj 00:18:50.960 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 144483ae34c2c0eec6391d7dfa389e9b57aeb07936d7cbaee81637ad4606a872 3 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 144483ae34c2c0eec6391d7dfa389e9b57aeb07936d7cbaee81637ad4606a872 3 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=144483ae34c2c0eec6391d7dfa389e9b57aeb07936d7cbaee81637ad4606a872 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Coj 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Coj 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Coj 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1434104 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1434104 ']' 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.961 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1434124 /var/tmp/host.sock 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1434124 ']' 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:18:51.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.220 01:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.478 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:51.479 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4EB 00:18:51.479 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.479 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.479 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.479 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4EB 00:18:51.479 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4EB 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.2Qc ]] 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Qc 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Qc 00:18:51.737 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Qc 00:18:51.995 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:51.995 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.GJi 00:18:51.995 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.995 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.995 01:57:06 
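Two daemons are now up and reachable over separate RPC sockets: the in-namespace nvmf_tgt on the default /var/tmp/spdk.sock (driven by rpc_cmd) and the host-side spdk_tgt on /var/tmp/host.sock (driven by the hostrpc wrapper). Each generated key file is registered with both, so later RPCs can refer to it by keyring name instead of by path. The first key pair, spelled out with a workspace-relative rpc.py path:

RPC=./spdk/scripts/rpc.py
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.4EB          # target side (/var/tmp/spdk.sock)
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Qc
$RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4EB    # host side
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Qc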
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.995 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.GJi 00:18:51.995 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.GJi 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.XJG ]] 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XJG 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XJG 00:18:52.254 01:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XJG 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ZOe 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ZOe 00:18:52.513 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ZOe 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.NQC ]] 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQC 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQC 00:18:52.771 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NQC 00:18:53.029 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:53.029 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Coj 00:18:53.029 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.029 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.030 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.030 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Coj 00:18:53.030 01:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Coj 00:18:53.288 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:53.288 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:53.288 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.288 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.288 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.288 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:53.545 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.546 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
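The test body then iterates digests, DH groups, and key indices. For each combination it pins the host-side negotiation parameters with bdev_nvme_set_options, authorizes the host NQN on the subsystem with the chosen key pair (the ckey, when present, makes the controller authenticate back as well), and attaches a controller that must complete DH-HMAC-CHAP. One iteration, condensed from the trace:

RPC=./spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups null
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0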
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.803 00:18:53.803 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.803 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.803 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.061 { 00:18:54.061 "cntlid": 1, 00:18:54.061 "qid": 0, 00:18:54.061 "state": "enabled", 00:18:54.061 "thread": "nvmf_tgt_poll_group_000", 00:18:54.061 "listen_address": { 00:18:54.061 "trtype": "TCP", 00:18:54.061 "adrfam": "IPv4", 00:18:54.061 "traddr": "10.0.0.2", 00:18:54.061 "trsvcid": "4420" 00:18:54.061 }, 00:18:54.061 "peer_address": { 00:18:54.061 "trtype": "TCP", 00:18:54.061 "adrfam": "IPv4", 00:18:54.061 "traddr": "10.0.0.1", 00:18:54.061 "trsvcid": "38358" 00:18:54.061 }, 00:18:54.061 "auth": { 00:18:54.061 "state": "completed", 00:18:54.061 "digest": "sha256", 00:18:54.061 "dhgroup": "null" 00:18:54.061 } 00:18:54.061 } 00:18:54.061 ]' 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.061 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.319 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:54.319 01:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.319 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.319 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.319 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.577 01:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
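After each attach the test reads the controller name back through the host socket and dumps the subsystem's qpairs from the target, asserting that auth.state is "completed" and that digest and dhgroup match the negotiated pair, as in the JSON above. The same check can be collapsed into one jq expression; the expression itself is not part of the traced script, only a sketch of the assertion it performs:

RPC=./spdk/scripts/rpc.py
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -e '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "null"'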
DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:55.514 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.772 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
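The same iteration also exercises the Linux kernel initiator: instead of keyring names, the plaintext DHHC-1 strings generated earlier are passed straight to nvme connect, and the session is torn down with nvme disconnect before the host is de-authorized again with nvmf_subsystem_remove_host. Schematically, with the secrets abbreviated to placeholders:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0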
ckey1 00:18:56.030 00:18:56.030 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.030 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.030 01:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.290 { 00:18:56.290 "cntlid": 3, 00:18:56.290 "qid": 0, 00:18:56.290 "state": "enabled", 00:18:56.290 "thread": "nvmf_tgt_poll_group_000", 00:18:56.290 "listen_address": { 00:18:56.290 "trtype": "TCP", 00:18:56.290 "adrfam": "IPv4", 00:18:56.290 "traddr": "10.0.0.2", 00:18:56.290 "trsvcid": "4420" 00:18:56.290 }, 00:18:56.290 "peer_address": { 00:18:56.290 "trtype": "TCP", 00:18:56.290 "adrfam": "IPv4", 00:18:56.290 "traddr": "10.0.0.1", 00:18:56.290 "trsvcid": "49552" 00:18:56.290 }, 00:18:56.290 "auth": { 00:18:56.290 "state": "completed", 00:18:56.290 "digest": "sha256", 00:18:56.290 "dhgroup": "null" 00:18:56.290 } 00:18:56.290 } 00:18:56.290 ]' 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.290 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.559 01:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.507 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.507 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.765 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.332 00:18:58.332 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.332 01:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.332 01:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.332 { 00:18:58.332 "cntlid": 5, 00:18:58.332 "qid": 0, 00:18:58.332 "state": "enabled", 00:18:58.332 "thread": "nvmf_tgt_poll_group_000", 00:18:58.332 "listen_address": { 00:18:58.332 "trtype": "TCP", 00:18:58.332 "adrfam": "IPv4", 00:18:58.332 "traddr": "10.0.0.2", 00:18:58.332 "trsvcid": "4420" 00:18:58.332 }, 00:18:58.332 "peer_address": { 00:18:58.332 "trtype": "TCP", 00:18:58.332 "adrfam": "IPv4", 00:18:58.332 "traddr": "10.0.0.1", 00:18:58.332 "trsvcid": "49586" 00:18:58.332 }, 00:18:58.332 "auth": { 00:18:58.332 "state": "completed", 00:18:58.332 "digest": "sha256", 00:18:58.332 "dhgroup": "null" 00:18:58.332 } 00:18:58.332 } 00:18:58.332 ]' 00:18:58.332 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.597 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.854 01:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.788 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.045 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.046 01:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.303 00:19:00.303 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.303 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.303 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
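Note that ckeys[3] was left empty when the keys were generated, so the key3 pass above adds the host with --dhchap-key only: the ${ckeys[$i]:+...} expansion drops the controller-key arguments, and that pass authenticates the host without requiring the target to prove itself in return. The relevant expansion, isolated with placeholder values where the real run uses key file paths:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
ckeys=( ckey0-path ckey1-path ckey2-path "" )            # index 3 intentionally empty
i=3
ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})         # empty value -> expands to nothing
echo rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$i" "${ckey[@]}"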
xtrace_disable 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.561 { 00:19:00.561 "cntlid": 7, 00:19:00.561 "qid": 0, 00:19:00.561 "state": "enabled", 00:19:00.561 "thread": "nvmf_tgt_poll_group_000", 00:19:00.561 "listen_address": { 00:19:00.561 "trtype": "TCP", 00:19:00.561 "adrfam": "IPv4", 00:19:00.561 "traddr": "10.0.0.2", 00:19:00.561 "trsvcid": "4420" 00:19:00.561 }, 00:19:00.561 "peer_address": { 00:19:00.561 "trtype": "TCP", 00:19:00.561 "adrfam": "IPv4", 00:19:00.561 "traddr": "10.0.0.1", 00:19:00.561 "trsvcid": "49612" 00:19:00.561 }, 00:19:00.561 "auth": { 00:19:00.561 "state": "completed", 00:19:00.561 "digest": "sha256", 00:19:00.561 "dhgroup": "null" 00:19:00.561 } 00:19:00.561 } 00:19:00.561 ]' 00:19:00.561 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.819 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.076 01:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.008 01:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:02.008 01:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.266 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.523 00:19:02.523 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.523 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.523 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.781 01:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.781 { 00:19:02.781 "cntlid": 9, 00:19:02.781 "qid": 0, 00:19:02.781 "state": "enabled", 00:19:02.781 "thread": "nvmf_tgt_poll_group_000", 00:19:02.781 "listen_address": { 00:19:02.781 "trtype": "TCP", 00:19:02.781 "adrfam": "IPv4", 00:19:02.781 "traddr": "10.0.0.2", 00:19:02.781 "trsvcid": "4420" 00:19:02.781 }, 00:19:02.781 "peer_address": { 00:19:02.781 "trtype": "TCP", 00:19:02.781 "adrfam": "IPv4", 00:19:02.781 "traddr": "10.0.0.1", 00:19:02.781 "trsvcid": "49624" 00:19:02.781 }, 00:19:02.781 "auth": { 00:19:02.781 "state": "completed", 00:19:02.781 "digest": "sha256", 00:19:02.781 "dhgroup": "ffdhe2048" 00:19:02.781 } 00:19:02.781 } 00:19:02.781 ]' 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.781 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.038 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:03.038 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.038 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.038 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.038 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.296 01:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.231 01:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.489 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:04.747 00:19:04.748 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.748 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.748 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.006 { 00:19:05.006 "cntlid": 11, 00:19:05.006 "qid": 0, 00:19:05.006 "state": "enabled", 00:19:05.006 "thread": "nvmf_tgt_poll_group_000", 00:19:05.006 "listen_address": { 
00:19:05.006 "trtype": "TCP", 00:19:05.006 "adrfam": "IPv4", 00:19:05.006 "traddr": "10.0.0.2", 00:19:05.006 "trsvcid": "4420" 00:19:05.006 }, 00:19:05.006 "peer_address": { 00:19:05.006 "trtype": "TCP", 00:19:05.006 "adrfam": "IPv4", 00:19:05.006 "traddr": "10.0.0.1", 00:19:05.006 "trsvcid": "49648" 00:19:05.006 }, 00:19:05.006 "auth": { 00:19:05.006 "state": "completed", 00:19:05.006 "digest": "sha256", 00:19:05.006 "dhgroup": "ffdhe2048" 00:19:05.006 } 00:19:05.006 } 00:19:05.006 ]' 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.006 01:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.265 01:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.640 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.641 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:06.899 00:19:06.899 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.899 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.899 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.156 { 00:19:07.156 "cntlid": 13, 00:19:07.156 "qid": 0, 00:19:07.156 "state": "enabled", 00:19:07.156 "thread": "nvmf_tgt_poll_group_000", 00:19:07.156 "listen_address": { 00:19:07.156 "trtype": "TCP", 00:19:07.156 "adrfam": "IPv4", 00:19:07.156 "traddr": "10.0.0.2", 00:19:07.156 "trsvcid": "4420" 00:19:07.156 }, 00:19:07.156 "peer_address": { 00:19:07.156 "trtype": "TCP", 00:19:07.156 "adrfam": "IPv4", 00:19:07.156 "traddr": "10.0.0.1", 00:19:07.156 "trsvcid": "51082" 00:19:07.156 }, 00:19:07.156 "auth": { 00:19:07.156 
"state": "completed", 00:19:07.156 "digest": "sha256", 00:19:07.156 "dhgroup": "ffdhe2048" 00:19:07.156 } 00:19:07.156 } 00:19:07.156 ]' 00:19:07.156 01:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.156 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.156 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.414 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:07.414 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.414 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.414 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.414 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.672 01:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.606 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.863 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.121 00:19:09.121 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.121 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.121 01:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.379 { 00:19:09.379 "cntlid": 15, 00:19:09.379 "qid": 0, 00:19:09.379 "state": "enabled", 00:19:09.379 "thread": "nvmf_tgt_poll_group_000", 00:19:09.379 "listen_address": { 00:19:09.379 "trtype": "TCP", 00:19:09.379 "adrfam": "IPv4", 00:19:09.379 "traddr": "10.0.0.2", 00:19:09.379 "trsvcid": "4420" 00:19:09.379 }, 00:19:09.379 "peer_address": { 00:19:09.379 "trtype": "TCP", 00:19:09.379 "adrfam": "IPv4", 00:19:09.379 "traddr": "10.0.0.1", 00:19:09.379 "trsvcid": "51110" 00:19:09.379 }, 00:19:09.379 "auth": { 00:19:09.379 "state": "completed", 00:19:09.379 "digest": "sha256", 00:19:09.379 "dhgroup": "ffdhe2048" 00:19:09.379 } 00:19:09.379 } 00:19:09.379 ]' 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:09.379 01:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.379 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.638 01:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.573 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:10.831 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:10.831 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.831 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.831 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:10.831 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.832 01:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.401 00:19:11.401 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.401 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.401 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.660 { 00:19:11.660 "cntlid": 17, 00:19:11.660 "qid": 0, 00:19:11.660 "state": "enabled", 00:19:11.660 "thread": "nvmf_tgt_poll_group_000", 00:19:11.660 "listen_address": { 00:19:11.660 "trtype": "TCP", 00:19:11.660 "adrfam": "IPv4", 00:19:11.660 "traddr": "10.0.0.2", 00:19:11.660 "trsvcid": "4420" 00:19:11.660 }, 00:19:11.660 "peer_address": { 00:19:11.660 "trtype": "TCP", 00:19:11.660 "adrfam": "IPv4", 00:19:11.660 "traddr": "10.0.0.1", 00:19:11.660 "trsvcid": "51134" 00:19:11.660 }, 00:19:11.660 "auth": { 00:19:11.660 "state": "completed", 00:19:11.660 "digest": "sha256", 00:19:11.660 "dhgroup": "ffdhe3072" 00:19:11.660 } 00:19:11.660 } 00:19:11.660 ]' 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:11.660 01:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.660 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.918 01:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:19:12.855 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.855 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.855 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.855 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.856 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.856 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.856 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:12.856 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.114 01:57:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.114 01:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.373 00:19:13.633 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.633 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.633 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.897 { 00:19:13.897 "cntlid": 19, 00:19:13.897 "qid": 0, 00:19:13.897 "state": "enabled", 00:19:13.897 "thread": "nvmf_tgt_poll_group_000", 00:19:13.897 "listen_address": { 00:19:13.897 "trtype": "TCP", 00:19:13.897 "adrfam": "IPv4", 00:19:13.897 "traddr": "10.0.0.2", 00:19:13.897 "trsvcid": "4420" 00:19:13.897 }, 00:19:13.897 "peer_address": { 00:19:13.897 "trtype": "TCP", 00:19:13.897 "adrfam": "IPv4", 00:19:13.897 "traddr": "10.0.0.1", 00:19:13.897 "trsvcid": "51172" 00:19:13.897 }, 00:19:13.897 "auth": { 00:19:13.897 "state": "completed", 00:19:13.897 "digest": "sha256", 00:19:13.897 "dhgroup": "ffdhe3072" 00:19:13.897 } 00:19:13.897 } 00:19:13.897 ]' 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.897 01:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.897 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.203 01:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.139 01:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.398 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.656 00:19:15.656 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.656 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.656 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.914 { 00:19:15.914 "cntlid": 21, 00:19:15.914 "qid": 0, 00:19:15.914 "state": "enabled", 00:19:15.914 "thread": "nvmf_tgt_poll_group_000", 00:19:15.914 "listen_address": { 00:19:15.914 "trtype": "TCP", 00:19:15.914 "adrfam": "IPv4", 00:19:15.914 "traddr": "10.0.0.2", 00:19:15.914 "trsvcid": "4420" 00:19:15.914 }, 00:19:15.914 "peer_address": { 00:19:15.914 "trtype": "TCP", 00:19:15.914 "adrfam": "IPv4", 00:19:15.914 "traddr": "10.0.0.1", 00:19:15.914 "trsvcid": "39900" 00:19:15.914 }, 00:19:15.914 "auth": { 00:19:15.914 "state": "completed", 00:19:15.914 "digest": "sha256", 00:19:15.914 "dhgroup": "ffdhe3072" 00:19:15.914 } 00:19:15.914 } 00:19:15.914 ]' 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.914 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.172 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.172 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.172 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.172 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.172 01:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.431 
01:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.368 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.626 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:17.626 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.626 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.626 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.626 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.626 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.627 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:17.627 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.627 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.627 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.627 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.627 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.885 00:19:17.885 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.885 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.885 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.143 { 00:19:18.143 "cntlid": 23, 00:19:18.143 "qid": 0, 00:19:18.143 "state": "enabled", 00:19:18.143 "thread": "nvmf_tgt_poll_group_000", 00:19:18.143 "listen_address": { 00:19:18.143 "trtype": "TCP", 00:19:18.143 "adrfam": "IPv4", 00:19:18.143 "traddr": "10.0.0.2", 00:19:18.143 "trsvcid": "4420" 00:19:18.143 }, 00:19:18.143 "peer_address": { 00:19:18.143 "trtype": "TCP", 00:19:18.143 "adrfam": "IPv4", 00:19:18.143 "traddr": "10.0.0.1", 00:19:18.143 "trsvcid": "39920" 00:19:18.143 }, 00:19:18.143 "auth": { 00:19:18.143 "state": "completed", 00:19:18.143 "digest": "sha256", 00:19:18.143 "dhgroup": "ffdhe3072" 00:19:18.143 } 00:19:18.143 } 00:19:18.143 ]' 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.143 01:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.143 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.402 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:18.402 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.402 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.402 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.402 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.660 01:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:19.594 01:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.594 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.851 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.852 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.852 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.852 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.852 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.852 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.109 00:19:20.109 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.109 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.109 01:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.366 { 00:19:20.366 "cntlid": 25, 00:19:20.366 "qid": 0, 00:19:20.366 "state": "enabled", 00:19:20.366 "thread": "nvmf_tgt_poll_group_000", 00:19:20.366 "listen_address": { 00:19:20.366 "trtype": "TCP", 00:19:20.366 "adrfam": "IPv4", 00:19:20.366 "traddr": "10.0.0.2", 00:19:20.366 "trsvcid": "4420" 00:19:20.366 }, 00:19:20.366 "peer_address": { 00:19:20.366 "trtype": "TCP", 00:19:20.366 "adrfam": "IPv4", 00:19:20.366 "traddr": "10.0.0.1", 00:19:20.366 "trsvcid": "39950" 00:19:20.366 }, 00:19:20.366 "auth": { 00:19:20.366 "state": "completed", 00:19:20.366 "digest": "sha256", 00:19:20.366 "dhgroup": "ffdhe4096" 00:19:20.366 } 00:19:20.366 } 00:19:20.366 ]' 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.366 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.623 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.623 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.623 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.880 01:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:21.815 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.073 01:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.330 00:19:22.330 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.330 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.330 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.587 { 00:19:22.587 "cntlid": 27, 00:19:22.587 "qid": 0, 00:19:22.587 "state": "enabled", 00:19:22.587 "thread": "nvmf_tgt_poll_group_000", 00:19:22.587 "listen_address": { 00:19:22.587 "trtype": "TCP", 00:19:22.587 "adrfam": "IPv4", 00:19:22.587 "traddr": "10.0.0.2", 00:19:22.587 "trsvcid": "4420" 00:19:22.587 }, 00:19:22.587 "peer_address": { 00:19:22.587 "trtype": "TCP", 00:19:22.587 "adrfam": "IPv4", 00:19:22.587 "traddr": "10.0.0.1", 00:19:22.587 "trsvcid": "39970" 00:19:22.587 }, 00:19:22.587 "auth": { 00:19:22.587 "state": "completed", 00:19:22.587 "digest": "sha256", 00:19:22.587 "dhgroup": "ffdhe4096" 00:19:22.587 } 00:19:22.587 } 00:19:22.587 ]' 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.587 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.846 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.846 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.846 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.105 01:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.038 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.295 01:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:24.553 00:19:24.553 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.553 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.553 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.811 { 00:19:24.811 "cntlid": 29, 00:19:24.811 "qid": 0, 00:19:24.811 "state": "enabled", 00:19:24.811 "thread": "nvmf_tgt_poll_group_000", 00:19:24.811 "listen_address": { 00:19:24.811 "trtype": "TCP", 00:19:24.811 "adrfam": "IPv4", 00:19:24.811 "traddr": "10.0.0.2", 00:19:24.811 "trsvcid": "4420" 00:19:24.811 }, 00:19:24.811 "peer_address": { 00:19:24.811 "trtype": "TCP", 00:19:24.811 "adrfam": "IPv4", 00:19:24.811 "traddr": "10.0.0.1", 00:19:24.811 "trsvcid": "40006" 00:19:24.811 }, 00:19:24.811 "auth": { 00:19:24.811 "state": "completed", 00:19:24.811 "digest": "sha256", 00:19:24.811 "dhgroup": "ffdhe4096" 00:19:24.811 } 00:19:24.811 } 00:19:24.811 ]' 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.811 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.069 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:25.069 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.069 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.069 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.069 01:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.326 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:19:26.258 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.258 01:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.258 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.258 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.258 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.258 01:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.258 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.258 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.516 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:26.773 00:19:26.773 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.773 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.773 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.031 { 00:19:27.031 "cntlid": 31, 00:19:27.031 "qid": 0, 00:19:27.031 "state": "enabled", 00:19:27.031 "thread": "nvmf_tgt_poll_group_000", 00:19:27.031 "listen_address": { 00:19:27.031 "trtype": "TCP", 00:19:27.031 "adrfam": "IPv4", 00:19:27.031 "traddr": "10.0.0.2", 00:19:27.031 "trsvcid": "4420" 00:19:27.031 }, 00:19:27.031 "peer_address": { 00:19:27.031 "trtype": "TCP", 00:19:27.031 "adrfam": "IPv4", 00:19:27.031 "traddr": "10.0.0.1", 00:19:27.031 "trsvcid": "39666" 00:19:27.031 }, 00:19:27.031 "auth": { 00:19:27.031 "state": "completed", 00:19:27.031 "digest": "sha256", 00:19:27.031 "dhgroup": "ffdhe4096" 00:19:27.031 } 00:19:27.031 } 00:19:27.031 ]' 00:19:27.031 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.289 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.289 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.289 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.289 01:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.289 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.289 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.289 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.548 01:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.485 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.744 01:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.308 00:19:29.308 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.308 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.308 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.586 { 00:19:29.586 "cntlid": 33, 00:19:29.586 "qid": 0, 00:19:29.586 "state": "enabled", 00:19:29.586 "thread": "nvmf_tgt_poll_group_000", 00:19:29.586 "listen_address": { 
00:19:29.586 "trtype": "TCP", 00:19:29.586 "adrfam": "IPv4", 00:19:29.586 "traddr": "10.0.0.2", 00:19:29.586 "trsvcid": "4420" 00:19:29.586 }, 00:19:29.586 "peer_address": { 00:19:29.586 "trtype": "TCP", 00:19:29.586 "adrfam": "IPv4", 00:19:29.586 "traddr": "10.0.0.1", 00:19:29.586 "trsvcid": "39684" 00:19:29.586 }, 00:19:29.586 "auth": { 00:19:29.586 "state": "completed", 00:19:29.586 "digest": "sha256", 00:19:29.586 "dhgroup": "ffdhe6144" 00:19:29.586 } 00:19:29.586 } 00:19:29.586 ]' 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.586 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.860 01:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:30.795 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:31.053 01:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.053 01:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:31.621 00:19:31.621 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.621 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.621 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.880 { 00:19:31.880 "cntlid": 35, 00:19:31.880 "qid": 0, 00:19:31.880 "state": "enabled", 00:19:31.880 "thread": "nvmf_tgt_poll_group_000", 00:19:31.880 "listen_address": { 00:19:31.880 "trtype": "TCP", 00:19:31.880 "adrfam": "IPv4", 00:19:31.880 "traddr": "10.0.0.2", 00:19:31.880 "trsvcid": "4420" 00:19:31.880 }, 00:19:31.880 "peer_address": { 00:19:31.880 "trtype": "TCP", 00:19:31.880 "adrfam": "IPv4", 00:19:31.880 "traddr": "10.0.0.1", 00:19:31.880 "trsvcid": "39728" 00:19:31.880 
}, 00:19:31.880 "auth": { 00:19:31.880 "state": "completed", 00:19:31.880 "digest": "sha256", 00:19:31.880 "dhgroup": "ffdhe6144" 00:19:31.880 } 00:19:31.880 } 00:19:31.880 ]' 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.880 01:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.140 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.079 01:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:33.337 01:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.337 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.903 00:19:33.903 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.903 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.903 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.162 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.162 01:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.162 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.162 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.162 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.162 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.162 { 00:19:34.162 "cntlid": 37, 00:19:34.162 "qid": 0, 00:19:34.162 "state": "enabled", 00:19:34.162 "thread": "nvmf_tgt_poll_group_000", 00:19:34.162 "listen_address": { 00:19:34.162 "trtype": "TCP", 00:19:34.162 "adrfam": "IPv4", 00:19:34.162 "traddr": "10.0.0.2", 00:19:34.162 "trsvcid": "4420" 00:19:34.162 }, 00:19:34.162 "peer_address": { 00:19:34.162 "trtype": "TCP", 00:19:34.162 "adrfam": "IPv4", 00:19:34.162 "traddr": "10.0.0.1", 00:19:34.162 "trsvcid": "39756" 00:19:34.162 }, 00:19:34.162 "auth": { 00:19:34.162 "state": "completed", 00:19:34.162 "digest": "sha256", 00:19:34.162 "dhgroup": "ffdhe6144" 00:19:34.162 } 00:19:34.162 } 00:19:34.162 ]' 00:19:34.162 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.162 01:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.162 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.420 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.420 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.420 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.420 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.420 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.679 01:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.613 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.870 01:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.439 00:19:36.439 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.439 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.439 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.698 { 00:19:36.698 "cntlid": 39, 00:19:36.698 "qid": 0, 00:19:36.698 "state": "enabled", 00:19:36.698 "thread": "nvmf_tgt_poll_group_000", 00:19:36.698 "listen_address": { 00:19:36.698 "trtype": "TCP", 00:19:36.698 "adrfam": "IPv4", 00:19:36.698 "traddr": "10.0.0.2", 00:19:36.698 "trsvcid": "4420" 00:19:36.698 }, 00:19:36.698 "peer_address": { 00:19:36.698 "trtype": "TCP", 00:19:36.698 "adrfam": "IPv4", 00:19:36.698 "traddr": "10.0.0.1", 00:19:36.698 "trsvcid": "39704" 00:19:36.698 }, 00:19:36.698 "auth": { 00:19:36.698 "state": "completed", 00:19:36.698 "digest": "sha256", 00:19:36.698 "dhgroup": "ffdhe6144" 00:19:36.698 } 00:19:36.698 } 00:19:36.698 ]' 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.698 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.955 01:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:37.889 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:38.147 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:38.147 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.147 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.147 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.147 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.147 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.148 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.148 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.148 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.148 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.148 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.148 01:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.080 00:19:39.080 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.080 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.080 01:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.337 { 00:19:39.337 "cntlid": 41, 00:19:39.337 "qid": 0, 00:19:39.337 "state": "enabled", 00:19:39.337 "thread": "nvmf_tgt_poll_group_000", 00:19:39.337 "listen_address": { 00:19:39.337 "trtype": "TCP", 00:19:39.337 "adrfam": "IPv4", 00:19:39.337 "traddr": "10.0.0.2", 00:19:39.337 "trsvcid": "4420" 00:19:39.337 }, 00:19:39.337 "peer_address": { 00:19:39.337 "trtype": "TCP", 00:19:39.337 "adrfam": "IPv4", 00:19:39.337 "traddr": "10.0.0.1", 00:19:39.337 "trsvcid": "39732" 00:19:39.337 }, 00:19:39.337 "auth": { 00:19:39.337 "state": "completed", 00:19:39.337 "digest": "sha256", 00:19:39.337 "dhgroup": "ffdhe8192" 00:19:39.337 } 00:19:39.337 } 00:19:39.337 ]' 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:39.337 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.596 01:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.531 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.789 01:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.726 00:19:41.726 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.726 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.726 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.984 { 00:19:41.984 "cntlid": 43, 00:19:41.984 "qid": 0, 00:19:41.984 "state": "enabled", 00:19:41.984 "thread": "nvmf_tgt_poll_group_000", 00:19:41.984 "listen_address": { 00:19:41.984 "trtype": "TCP", 00:19:41.984 "adrfam": "IPv4", 00:19:41.984 "traddr": "10.0.0.2", 00:19:41.984 "trsvcid": "4420" 00:19:41.984 }, 00:19:41.984 "peer_address": { 00:19:41.984 "trtype": "TCP", 00:19:41.984 "adrfam": "IPv4", 00:19:41.984 "traddr": "10.0.0.1", 00:19:41.984 "trsvcid": "39758" 00:19:41.984 }, 00:19:41.984 "auth": { 00:19:41.984 "state": "completed", 00:19:41.984 "digest": "sha256", 00:19:41.984 "dhgroup": "ffdhe8192" 00:19:41.984 } 00:19:41.984 } 00:19:41.984 ]' 00:19:41.984 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.242 01:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.499 01:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.438 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.696 01:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.633 00:19:44.633 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.633 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.633 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.891 { 00:19:44.891 "cntlid": 45, 00:19:44.891 "qid": 0, 00:19:44.891 "state": "enabled", 00:19:44.891 "thread": "nvmf_tgt_poll_group_000", 00:19:44.891 "listen_address": { 00:19:44.891 "trtype": "TCP", 00:19:44.891 "adrfam": "IPv4", 00:19:44.891 "traddr": "10.0.0.2", 00:19:44.891 "trsvcid": "4420" 00:19:44.891 }, 00:19:44.891 "peer_address": { 00:19:44.891 "trtype": "TCP", 00:19:44.891 "adrfam": "IPv4", 00:19:44.891 "traddr": "10.0.0.1", 00:19:44.891 "trsvcid": "39774" 00:19:44.891 }, 00:19:44.891 "auth": { 00:19:44.891 "state": "completed", 00:19:44.891 "digest": "sha256", 00:19:44.891 "dhgroup": "ffdhe8192" 00:19:44.891 } 00:19:44.891 } 00:19:44.891 ]' 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.891 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.150 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.150 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.150 01:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.410 01:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret 
DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.366 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.640 01:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.576 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.576 01:58:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.576 { 00:19:47.576 "cntlid": 47, 00:19:47.576 "qid": 0, 00:19:47.576 "state": "enabled", 00:19:47.576 "thread": "nvmf_tgt_poll_group_000", 00:19:47.576 "listen_address": { 00:19:47.576 "trtype": "TCP", 00:19:47.576 "adrfam": "IPv4", 00:19:47.576 "traddr": "10.0.0.2", 00:19:47.576 "trsvcid": "4420" 00:19:47.576 }, 00:19:47.576 "peer_address": { 00:19:47.576 "trtype": "TCP", 00:19:47.576 "adrfam": "IPv4", 00:19:47.576 "traddr": "10.0.0.1", 00:19:47.576 "trsvcid": "42140" 00:19:47.576 }, 00:19:47.576 "auth": { 00:19:47.576 "state": "completed", 00:19:47.576 "digest": "sha256", 00:19:47.576 "dhgroup": "ffdhe8192" 00:19:47.576 } 00:19:47.576 } 00:19:47.576 ]' 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.576 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.834 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:47.834 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.834 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.834 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.834 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.092 01:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:49.047 01:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.305 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.562 00:19:49.562 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.562 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.562 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.820 { 00:19:49.820 "cntlid": 49, 00:19:49.820 "qid": 0, 00:19:49.820 "state": "enabled", 00:19:49.820 "thread": "nvmf_tgt_poll_group_000", 00:19:49.820 "listen_address": { 00:19:49.820 "trtype": "TCP", 00:19:49.820 "adrfam": "IPv4", 00:19:49.820 "traddr": "10.0.0.2", 00:19:49.820 "trsvcid": "4420" 00:19:49.820 }, 00:19:49.820 "peer_address": { 00:19:49.820 "trtype": "TCP", 00:19:49.820 "adrfam": "IPv4", 00:19:49.820 "traddr": "10.0.0.1", 00:19:49.820 "trsvcid": "42158" 00:19:49.820 }, 00:19:49.820 "auth": { 00:19:49.820 "state": "completed", 00:19:49.820 "digest": "sha384", 00:19:49.820 "dhgroup": "null" 00:19:49.820 } 00:19:49.820 } 00:19:49.820 ]' 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.820 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.078 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:50.078 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.078 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.078 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.078 01:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.336 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.268 01:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:51.268 01:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.526 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.784 00:19:51.784 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.784 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.784 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.051 { 00:19:52.051 "cntlid": 51, 00:19:52.051 "qid": 0, 00:19:52.051 "state": "enabled", 00:19:52.051 "thread": "nvmf_tgt_poll_group_000", 00:19:52.051 "listen_address": { 00:19:52.051 "trtype": "TCP", 00:19:52.051 "adrfam": "IPv4", 00:19:52.051 "traddr": "10.0.0.2", 00:19:52.051 "trsvcid": "4420" 00:19:52.051 }, 00:19:52.051 "peer_address": { 00:19:52.051 "trtype": "TCP", 00:19:52.051 "adrfam": "IPv4", 00:19:52.051 "traddr": "10.0.0.1", 00:19:52.051 "trsvcid": "42186" 00:19:52.051 }, 00:19:52.051 "auth": { 00:19:52.051 "state": "completed", 00:19:52.051 "digest": "sha384", 00:19:52.051 "dhgroup": "null" 00:19:52.051 } 00:19:52.051 } 00:19:52.051 ]' 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:52.051 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.314 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.314 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.314 01:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.572 01:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.505 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.763 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.020 00:19:54.020 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.020 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.020 01:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.277 { 00:19:54.277 "cntlid": 53, 00:19:54.277 "qid": 0, 00:19:54.277 "state": "enabled", 00:19:54.277 "thread": "nvmf_tgt_poll_group_000", 00:19:54.277 "listen_address": { 00:19:54.277 "trtype": "TCP", 00:19:54.277 "adrfam": "IPv4", 00:19:54.277 "traddr": "10.0.0.2", 00:19:54.277 "trsvcid": "4420" 00:19:54.277 }, 00:19:54.277 "peer_address": { 00:19:54.277 "trtype": "TCP", 00:19:54.277 "adrfam": "IPv4", 00:19:54.277 "traddr": "10.0.0.1", 00:19:54.277 "trsvcid": "42202" 00:19:54.277 }, 00:19:54.277 "auth": { 00:19:54.277 "state": "completed", 00:19:54.277 "digest": "sha384", 00:19:54.277 "dhgroup": "null" 00:19:54.277 } 00:19:54.277 } 00:19:54.277 ]' 00:19:54.277 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.534 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.792 01:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.731 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.990 01:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.248 00:19:56.248 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.248 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.248 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.506 { 00:19:56.506 "cntlid": 55, 00:19:56.506 "qid": 0, 00:19:56.506 "state": "enabled", 00:19:56.506 "thread": "nvmf_tgt_poll_group_000", 00:19:56.506 "listen_address": { 00:19:56.506 "trtype": "TCP", 00:19:56.506 "adrfam": "IPv4", 00:19:56.506 "traddr": "10.0.0.2", 00:19:56.506 "trsvcid": "4420" 00:19:56.506 }, 00:19:56.506 "peer_address": { 
00:19:56.506 "trtype": "TCP", 00:19:56.506 "adrfam": "IPv4", 00:19:56.506 "traddr": "10.0.0.1", 00:19:56.506 "trsvcid": "56254" 00:19:56.506 }, 00:19:56.506 "auth": { 00:19:56.506 "state": "completed", 00:19:56.506 "digest": "sha384", 00:19:56.506 "dhgroup": "null" 00:19:56.506 } 00:19:56.506 } 00:19:56.506 ]' 00:19:56.506 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.764 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.022 01:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:57.960 01:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.218 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.477 00:19:58.477 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.477 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.477 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.734 { 00:19:58.734 "cntlid": 57, 00:19:58.734 "qid": 0, 00:19:58.734 "state": "enabled", 00:19:58.734 "thread": "nvmf_tgt_poll_group_000", 00:19:58.734 "listen_address": { 00:19:58.734 "trtype": "TCP", 00:19:58.734 "adrfam": "IPv4", 00:19:58.734 "traddr": "10.0.0.2", 00:19:58.734 "trsvcid": "4420" 00:19:58.734 }, 00:19:58.734 "peer_address": { 00:19:58.734 "trtype": "TCP", 00:19:58.734 "adrfam": "IPv4", 00:19:58.734 "traddr": "10.0.0.1", 00:19:58.734 "trsvcid": "56280" 00:19:58.734 }, 00:19:58.734 "auth": { 00:19:58.734 "state": "completed", 00:19:58.734 "digest": "sha384", 00:19:58.734 "dhgroup": "ffdhe2048" 00:19:58.734 } 00:19:58.734 } 00:19:58.734 ]' 
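(Editor's sketch, not part of the captured log: the condensed command flow behind one connect_authenticate round traced above, using the sha384/ffdhe2048 combination exercised at this point in the run. The key names key0/ckey0 are assumed to have been registered earlier by target/auth.sh, outside this excerpt.)

# Editor's sketch only -- a minimal reconstruction of one round, not the script itself.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
SUBNQN=nqn.2024-03.io.spdk:cnode0
# Host side (bdev_nvme, host RPC socket): restrict DH-HMAC-CHAP negotiation to
# the digest/dhgroup under test.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# Target side (default RPC socket): allow the host NQN with its key pair
# (key0/ckey0 are keys set up earlier in the script, not shown in this excerpt).
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Host side: attach a controller; authentication runs during connect.
"$SPDK"/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Target side: read back what was negotiated on the new queue pair.
"$SPDK"/scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" \
    | jq -r '.[0].auth | .state, .digest, .dhgroup'

The host-side bdev_nvme options go to /var/tmp/host.sock while the subsystem calls go to the target's default socket, which is why the log shows two different rpc.py invocations per step.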
00:19:58.734 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.992 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.250 01:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:00.186 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.186 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.186 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.186 01:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.186 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.186 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.186 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.186 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.445 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.703 00:20:00.703 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.703 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.703 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.960 { 00:20:00.960 "cntlid": 59, 00:20:00.960 "qid": 0, 00:20:00.960 "state": "enabled", 00:20:00.960 "thread": "nvmf_tgt_poll_group_000", 00:20:00.960 "listen_address": { 00:20:00.960 "trtype": "TCP", 00:20:00.960 "adrfam": "IPv4", 00:20:00.960 "traddr": "10.0.0.2", 00:20:00.960 "trsvcid": "4420" 00:20:00.960 }, 00:20:00.960 "peer_address": { 00:20:00.960 "trtype": "TCP", 00:20:00.960 "adrfam": "IPv4", 00:20:00.960 "traddr": "10.0.0.1", 00:20:00.960 "trsvcid": "56312" 00:20:00.960 }, 00:20:00.960 "auth": { 00:20:00.960 "state": "completed", 00:20:00.960 "digest": "sha384", 00:20:00.960 "dhgroup": "ffdhe2048" 00:20:00.960 } 00:20:00.960 } 00:20:00.960 ]' 00:20:00.960 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.216 01:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.474 01:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.410 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.694 
01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.694 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.956 00:20:03.215 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.215 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.215 01:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.215 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.215 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.215 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.215 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.473 { 00:20:03.473 "cntlid": 61, 00:20:03.473 "qid": 0, 00:20:03.473 "state": "enabled", 00:20:03.473 "thread": "nvmf_tgt_poll_group_000", 00:20:03.473 "listen_address": { 00:20:03.473 "trtype": "TCP", 00:20:03.473 "adrfam": "IPv4", 00:20:03.473 "traddr": "10.0.0.2", 00:20:03.473 "trsvcid": "4420" 00:20:03.473 }, 00:20:03.473 "peer_address": { 00:20:03.473 "trtype": "TCP", 00:20:03.473 "adrfam": "IPv4", 00:20:03.473 "traddr": "10.0.0.1", 00:20:03.473 "trsvcid": "56334" 00:20:03.473 }, 00:20:03.473 "auth": { 00:20:03.473 "state": "completed", 00:20:03.473 "digest": "sha384", 00:20:03.473 "dhgroup": "ffdhe2048" 00:20:03.473 } 00:20:03.473 } 00:20:03.473 ]' 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.473 01:58:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.473 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.731 01:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.669 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.926 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.926 
01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.927 01:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.184 00:20:05.184 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.184 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.184 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.443 { 00:20:05.443 "cntlid": 63, 00:20:05.443 "qid": 0, 00:20:05.443 "state": "enabled", 00:20:05.443 "thread": "nvmf_tgt_poll_group_000", 00:20:05.443 "listen_address": { 00:20:05.443 "trtype": "TCP", 00:20:05.443 "adrfam": "IPv4", 00:20:05.443 "traddr": "10.0.0.2", 00:20:05.443 "trsvcid": "4420" 00:20:05.443 }, 00:20:05.443 "peer_address": { 00:20:05.443 "trtype": "TCP", 00:20:05.443 "adrfam": "IPv4", 00:20:05.443 "traddr": "10.0.0.1", 00:20:05.443 "trsvcid": "50542" 00:20:05.443 }, 00:20:05.443 "auth": { 00:20:05.443 "state": "completed", 00:20:05.443 "digest": "sha384", 00:20:05.443 "dhgroup": "ffdhe2048" 00:20:05.443 } 00:20:05.443 } 00:20:05.443 ]' 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:05.443 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.701 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.701 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.701 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.701 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.701 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:05.960 01:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:06.895 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.153 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.154 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.154 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.154 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.154 01:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.154 01:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.411 00:20:07.411 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.412 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.412 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.669 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.669 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.669 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.669 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.669 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.669 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.669 { 00:20:07.669 "cntlid": 65, 00:20:07.669 "qid": 0, 00:20:07.669 "state": "enabled", 00:20:07.669 "thread": "nvmf_tgt_poll_group_000", 00:20:07.669 "listen_address": { 00:20:07.669 "trtype": "TCP", 00:20:07.669 "adrfam": "IPv4", 00:20:07.669 "traddr": "10.0.0.2", 00:20:07.669 "trsvcid": "4420" 00:20:07.669 }, 00:20:07.669 "peer_address": { 00:20:07.669 "trtype": "TCP", 00:20:07.669 "adrfam": "IPv4", 00:20:07.670 "traddr": "10.0.0.1", 00:20:07.670 "trsvcid": "50584" 00:20:07.670 }, 00:20:07.670 "auth": { 00:20:07.670 "state": "completed", 00:20:07.670 "digest": "sha384", 00:20:07.670 "dhgroup": "ffdhe3072" 00:20:07.670 } 00:20:07.670 } 00:20:07.670 ]' 00:20:07.670 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.670 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.670 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.670 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.670 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.929 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.929 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.929 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.186 01:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.119 01:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.376 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.634 00:20:09.634 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.634 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.634 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.892 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.892 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.892 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.892 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.892 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.892 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.892 { 00:20:09.892 "cntlid": 67, 00:20:09.892 "qid": 0, 00:20:09.892 "state": "enabled", 00:20:09.892 "thread": "nvmf_tgt_poll_group_000", 00:20:09.892 "listen_address": { 00:20:09.892 "trtype": "TCP", 00:20:09.892 "adrfam": "IPv4", 00:20:09.892 "traddr": "10.0.0.2", 00:20:09.892 "trsvcid": "4420" 00:20:09.892 }, 00:20:09.892 "peer_address": { 00:20:09.892 "trtype": "TCP", 00:20:09.892 "adrfam": "IPv4", 00:20:09.893 "traddr": "10.0.0.1", 00:20:09.893 "trsvcid": "50608" 00:20:09.893 }, 00:20:09.893 "auth": { 00:20:09.893 "state": "completed", 00:20:09.893 "digest": "sha384", 00:20:09.893 "dhgroup": "ffdhe3072" 00:20:09.893 } 00:20:09.893 } 00:20:09.893 ]' 00:20:09.893 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.893 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.893 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.150 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.150 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.150 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.150 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.150 01:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.408 01:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:11.344 01:58:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.344 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.603 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.860 00:20:11.860 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.860 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.860 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.118 { 00:20:12.118 "cntlid": 69, 00:20:12.118 "qid": 0, 00:20:12.118 "state": "enabled", 00:20:12.118 "thread": "nvmf_tgt_poll_group_000", 00:20:12.118 "listen_address": { 00:20:12.118 "trtype": "TCP", 00:20:12.118 "adrfam": "IPv4", 00:20:12.118 "traddr": "10.0.0.2", 00:20:12.118 "trsvcid": "4420" 00:20:12.118 }, 00:20:12.118 "peer_address": { 00:20:12.118 "trtype": "TCP", 00:20:12.118 "adrfam": "IPv4", 00:20:12.118 "traddr": "10.0.0.1", 00:20:12.118 "trsvcid": "50628" 00:20:12.118 }, 00:20:12.118 "auth": { 00:20:12.118 "state": "completed", 00:20:12.118 "digest": "sha384", 00:20:12.118 "dhgroup": "ffdhe3072" 00:20:12.118 } 00:20:12.118 } 00:20:12.118 ]' 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.118 01:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.118 01:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.119 01:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.119 01:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.378 01:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:20:13.317 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.317 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.317 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.317 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.575 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.575 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.575 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.575 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.833 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:14.091 00:20:14.091 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.091 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.091 01:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.349 01:58:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.349 { 00:20:14.349 "cntlid": 71, 00:20:14.349 "qid": 0, 00:20:14.349 "state": "enabled", 00:20:14.349 "thread": "nvmf_tgt_poll_group_000", 00:20:14.349 "listen_address": { 00:20:14.349 "trtype": "TCP", 00:20:14.349 "adrfam": "IPv4", 00:20:14.349 "traddr": "10.0.0.2", 00:20:14.349 "trsvcid": "4420" 00:20:14.349 }, 00:20:14.349 "peer_address": { 00:20:14.349 "trtype": "TCP", 00:20:14.349 "adrfam": "IPv4", 00:20:14.349 "traddr": "10.0.0.1", 00:20:14.349 "trsvcid": "50646" 00:20:14.349 }, 00:20:14.349 "auth": { 00:20:14.349 "state": "completed", 00:20:14.349 "digest": "sha384", 00:20:14.349 "dhgroup": "ffdhe3072" 00:20:14.349 } 00:20:14.349 } 00:20:14.349 ]' 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.349 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.607 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.607 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.607 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.865 01:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.802 01:58:30 
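[editor's note] The trace that follows repeats the same pattern for the next DH group. As a reading aid only, this is a rough sketch of the outer loop that the auth.sh@92-@96 markers correspond to; it is reconstructed from the xtrace above, not copied from target/auth.sh, and the array/function names are simply those the trace prints. The digest stays fixed at sha384 throughout this stretch of the log.

# sketch only -- inferred from the traced "for dhgroup"/"for keyid" lines above
for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ... per the trace
  for keyid in "${!keys[@]}"; do           # key0..key3 as seen in the trace
    # host-side options must match what the target will negotiate
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    connect_authenticate sha384 "$dhgroup" "$keyid"
  done
done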
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:15.802 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.059 01:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:16.317 00:20:16.317 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.317 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.317 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.575 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.575 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.575 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.575 01:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.575 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.575 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.575 { 00:20:16.575 "cntlid": 73, 00:20:16.575 "qid": 0, 00:20:16.575 "state": "enabled", 00:20:16.575 "thread": "nvmf_tgt_poll_group_000", 00:20:16.575 "listen_address": { 00:20:16.575 "trtype": "TCP", 00:20:16.575 "adrfam": "IPv4", 00:20:16.575 "traddr": "10.0.0.2", 00:20:16.575 "trsvcid": "4420" 00:20:16.575 }, 00:20:16.575 "peer_address": { 00:20:16.575 "trtype": "TCP", 00:20:16.575 "adrfam": "IPv4", 00:20:16.575 "traddr": "10.0.0.1", 00:20:16.575 "trsvcid": "56474" 00:20:16.575 }, 00:20:16.575 "auth": { 00:20:16.575 "state": "completed", 00:20:16.575 "digest": "sha384", 00:20:16.575 "dhgroup": "ffdhe4096" 00:20:16.575 } 00:20:16.575 } 00:20:16.575 ]' 00:20:16.575 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.832 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.090 01:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.024 01:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.281 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.879 00:20:18.879 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.879 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.879 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:20:19.137 { 00:20:19.137 "cntlid": 75, 00:20:19.137 "qid": 0, 00:20:19.137 "state": "enabled", 00:20:19.137 "thread": "nvmf_tgt_poll_group_000", 00:20:19.137 "listen_address": { 00:20:19.137 "trtype": "TCP", 00:20:19.137 "adrfam": "IPv4", 00:20:19.137 "traddr": "10.0.0.2", 00:20:19.137 "trsvcid": "4420" 00:20:19.137 }, 00:20:19.137 "peer_address": { 00:20:19.137 "trtype": "TCP", 00:20:19.137 "adrfam": "IPv4", 00:20:19.137 "traddr": "10.0.0.1", 00:20:19.137 "trsvcid": "56510" 00:20:19.137 }, 00:20:19.137 "auth": { 00:20:19.137 "state": "completed", 00:20:19.137 "digest": "sha384", 00:20:19.137 "dhgroup": "ffdhe4096" 00:20:19.137 } 00:20:19.137 } 00:20:19.137 ]' 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.137 01:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.430 01:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.363 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:20.622 
01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.622 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.189 00:20:21.189 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.189 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.189 01:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.448 { 00:20:21.448 "cntlid": 77, 00:20:21.448 "qid": 0, 00:20:21.448 "state": "enabled", 00:20:21.448 "thread": "nvmf_tgt_poll_group_000", 00:20:21.448 "listen_address": { 00:20:21.448 "trtype": "TCP", 00:20:21.448 "adrfam": "IPv4", 00:20:21.448 "traddr": "10.0.0.2", 00:20:21.448 "trsvcid": "4420" 00:20:21.448 }, 00:20:21.448 "peer_address": { 
00:20:21.448 "trtype": "TCP", 00:20:21.448 "adrfam": "IPv4", 00:20:21.448 "traddr": "10.0.0.1", 00:20:21.448 "trsvcid": "56540" 00:20:21.448 }, 00:20:21.448 "auth": { 00:20:21.448 "state": "completed", 00:20:21.448 "digest": "sha384", 00:20:21.448 "dhgroup": "ffdhe4096" 00:20:21.448 } 00:20:21.448 } 00:20:21.448 ]' 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.448 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.706 01:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:20:22.642 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.642 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.643 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.643 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.643 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.643 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.643 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.643 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:22.900 01:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.467 00:20:23.467 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.467 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.467 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.725 { 00:20:23.725 "cntlid": 79, 00:20:23.725 "qid": 0, 00:20:23.725 "state": "enabled", 00:20:23.725 "thread": "nvmf_tgt_poll_group_000", 00:20:23.725 "listen_address": { 00:20:23.725 "trtype": "TCP", 00:20:23.725 "adrfam": "IPv4", 00:20:23.725 "traddr": "10.0.0.2", 00:20:23.725 "trsvcid": "4420" 00:20:23.725 }, 00:20:23.725 "peer_address": { 00:20:23.725 "trtype": "TCP", 00:20:23.725 "adrfam": "IPv4", 00:20:23.725 "traddr": "10.0.0.1", 00:20:23.725 "trsvcid": "56580" 00:20:23.725 }, 00:20:23.725 "auth": { 00:20:23.725 "state": "completed", 00:20:23.725 "digest": "sha384", 00:20:23.725 "dhgroup": "ffdhe4096" 00:20:23.725 } 00:20:23.725 } 00:20:23.725 ]' 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.725 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.726 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.985 01:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.363 01:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.363 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.930 00:20:25.930 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.930 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.930 01:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.188 { 00:20:26.188 "cntlid": 81, 00:20:26.188 "qid": 0, 00:20:26.188 "state": "enabled", 00:20:26.188 "thread": "nvmf_tgt_poll_group_000", 00:20:26.188 "listen_address": { 00:20:26.188 "trtype": "TCP", 00:20:26.188 "adrfam": "IPv4", 00:20:26.188 "traddr": "10.0.0.2", 00:20:26.188 "trsvcid": "4420" 00:20:26.188 }, 00:20:26.188 "peer_address": { 00:20:26.188 "trtype": "TCP", 00:20:26.188 "adrfam": "IPv4", 00:20:26.188 "traddr": "10.0.0.1", 00:20:26.188 "trsvcid": "58998" 00:20:26.188 }, 00:20:26.188 "auth": { 00:20:26.188 "state": "completed", 00:20:26.188 "digest": "sha384", 00:20:26.188 "dhgroup": "ffdhe6144" 00:20:26.188 } 00:20:26.188 } 00:20:26.188 ]' 00:20:26.188 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.447 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.447 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.447 01:58:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.447 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.447 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.447 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.447 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.705 01:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.639 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.897 01:58:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.897 01:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.465 00:20:28.465 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.465 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.465 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.723 { 00:20:28.723 "cntlid": 83, 00:20:28.723 "qid": 0, 00:20:28.723 "state": "enabled", 00:20:28.723 "thread": "nvmf_tgt_poll_group_000", 00:20:28.723 "listen_address": { 00:20:28.723 "trtype": "TCP", 00:20:28.723 "adrfam": "IPv4", 00:20:28.723 "traddr": "10.0.0.2", 00:20:28.723 "trsvcid": "4420" 00:20:28.723 }, 00:20:28.723 "peer_address": { 00:20:28.723 "trtype": "TCP", 00:20:28.723 "adrfam": "IPv4", 00:20:28.723 "traddr": "10.0.0.1", 00:20:28.723 "trsvcid": "59024" 00:20:28.723 }, 00:20:28.723 "auth": { 00:20:28.723 "state": "completed", 00:20:28.723 "digest": "sha384", 00:20:28.723 "dhgroup": "ffdhe6144" 00:20:28.723 } 00:20:28.723 } 00:20:28.723 ]' 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.723 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.981 01:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:29.919 01:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.486 01:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.486 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.746 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.005 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.263 { 00:20:31.263 "cntlid": 85, 00:20:31.263 "qid": 0, 00:20:31.263 "state": "enabled", 00:20:31.263 "thread": "nvmf_tgt_poll_group_000", 00:20:31.263 "listen_address": { 00:20:31.263 "trtype": "TCP", 00:20:31.263 "adrfam": "IPv4", 00:20:31.263 "traddr": "10.0.0.2", 00:20:31.263 "trsvcid": "4420" 00:20:31.263 }, 00:20:31.263 "peer_address": { 00:20:31.263 "trtype": "TCP", 00:20:31.263 "adrfam": "IPv4", 00:20:31.263 "traddr": "10.0.0.1", 00:20:31.263 "trsvcid": "59050" 00:20:31.263 }, 00:20:31.263 "auth": { 00:20:31.263 "state": "completed", 00:20:31.263 "digest": "sha384", 00:20:31.263 "dhgroup": "ffdhe6144" 00:20:31.263 } 00:20:31.263 } 00:20:31.263 ]' 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.263 01:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.263 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.263 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.263 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.523 01:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:20:32.462 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.463 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.722 01:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.722 01:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:33.290 00:20:33.290 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:33.290 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:33.290 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.548 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.548 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.548 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.549 { 00:20:33.549 "cntlid": 87, 00:20:33.549 "qid": 0, 00:20:33.549 "state": "enabled", 00:20:33.549 "thread": "nvmf_tgt_poll_group_000", 00:20:33.549 "listen_address": { 00:20:33.549 "trtype": "TCP", 00:20:33.549 "adrfam": "IPv4", 00:20:33.549 "traddr": "10.0.0.2", 00:20:33.549 "trsvcid": "4420" 00:20:33.549 }, 00:20:33.549 "peer_address": { 00:20:33.549 "trtype": "TCP", 00:20:33.549 "adrfam": "IPv4", 00:20:33.549 "traddr": "10.0.0.1", 00:20:33.549 "trsvcid": "59084" 00:20:33.549 }, 00:20:33.549 "auth": { 00:20:33.549 "state": "completed", 00:20:33.549 "digest": "sha384", 00:20:33.549 "dhgroup": "ffdhe6144" 00:20:33.549 } 00:20:33.549 } 00:20:33.549 ]' 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.549 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.808 01:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.743 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.003 01:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:35.940 00:20:35.940 01:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.940 01:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.940 01:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.198 { 00:20:36.198 "cntlid": 89, 00:20:36.198 "qid": 0, 00:20:36.198 "state": "enabled", 00:20:36.198 "thread": "nvmf_tgt_poll_group_000", 00:20:36.198 "listen_address": { 00:20:36.198 "trtype": "TCP", 00:20:36.198 "adrfam": "IPv4", 00:20:36.198 "traddr": "10.0.0.2", 00:20:36.198 "trsvcid": "4420" 00:20:36.198 }, 00:20:36.198 "peer_address": { 00:20:36.198 "trtype": "TCP", 00:20:36.198 "adrfam": "IPv4", 00:20:36.198 "traddr": "10.0.0.1", 00:20:36.198 "trsvcid": "41984" 00:20:36.198 }, 00:20:36.198 "auth": { 00:20:36.198 "state": "completed", 00:20:36.198 "digest": "sha384", 00:20:36.198 "dhgroup": "ffdhe8192" 00:20:36.198 } 00:20:36.198 } 00:20:36.198 ]' 00:20:36.198 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.455 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:36.455 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.455 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.455 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.456 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.456 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.456 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.714 01:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:37.650 01:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.650 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.908 01:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.843 00:20:38.843 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.843 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.843 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.104 { 00:20:39.104 "cntlid": 91, 00:20:39.104 "qid": 0, 00:20:39.104 "state": "enabled", 00:20:39.104 "thread": "nvmf_tgt_poll_group_000", 00:20:39.104 "listen_address": { 00:20:39.104 "trtype": "TCP", 00:20:39.104 "adrfam": "IPv4", 00:20:39.104 "traddr": "10.0.0.2", 00:20:39.104 "trsvcid": "4420" 00:20:39.104 }, 00:20:39.104 "peer_address": { 00:20:39.104 "trtype": "TCP", 00:20:39.104 "adrfam": "IPv4", 00:20:39.104 "traddr": "10.0.0.1", 00:20:39.104 "trsvcid": "42010" 00:20:39.104 }, 00:20:39.104 "auth": { 00:20:39.104 "state": "completed", 00:20:39.104 "digest": "sha384", 00:20:39.104 "dhgroup": "ffdhe8192" 00:20:39.104 } 00:20:39.104 } 00:20:39.104 ]' 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.104 01:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.673 01:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:40.607 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.608 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.866 01:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.805 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.805 { 00:20:41.805 "cntlid": 93, 00:20:41.805 "qid": 0, 00:20:41.805 "state": "enabled", 00:20:41.805 "thread": "nvmf_tgt_poll_group_000", 00:20:41.805 "listen_address": { 00:20:41.805 "trtype": "TCP", 00:20:41.805 "adrfam": "IPv4", 00:20:41.805 "traddr": "10.0.0.2", 00:20:41.805 "trsvcid": "4420" 00:20:41.805 }, 00:20:41.805 "peer_address": { 00:20:41.805 "trtype": "TCP", 00:20:41.805 "adrfam": "IPv4", 00:20:41.805 "traddr": "10.0.0.1", 00:20:41.805 "trsvcid": "42046" 00:20:41.805 }, 00:20:41.805 "auth": { 00:20:41.805 "state": "completed", 00:20:41.805 "digest": "sha384", 00:20:41.805 "dhgroup": "ffdhe8192" 00:20:41.805 } 00:20:41.805 } 00:20:41.805 ]' 00:20:41.805 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.107 01:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.365 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.301 01:58:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.301 01:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.559 01:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.496 00:20:44.496 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.496 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.496 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.755 { 00:20:44.755 "cntlid": 95, 00:20:44.755 "qid": 0, 00:20:44.755 "state": "enabled", 00:20:44.755 "thread": "nvmf_tgt_poll_group_000", 00:20:44.755 "listen_address": { 00:20:44.755 "trtype": "TCP", 00:20:44.755 "adrfam": "IPv4", 00:20:44.755 "traddr": "10.0.0.2", 00:20:44.755 "trsvcid": "4420" 00:20:44.755 }, 00:20:44.755 "peer_address": { 00:20:44.755 "trtype": "TCP", 00:20:44.755 "adrfam": "IPv4", 00:20:44.755 "traddr": "10.0.0.1", 00:20:44.755 "trsvcid": "42076" 00:20:44.755 }, 00:20:44.755 "auth": { 00:20:44.755 "state": "completed", 00:20:44.755 "digest": "sha384", 00:20:44.755 "dhgroup": "ffdhe8192" 00:20:44.755 } 00:20:44.755 } 00:20:44.755 ]' 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.755 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.014 01:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:20:45.949 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.949 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.949 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.950 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.950 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.950 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:45.950 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.950 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.950 01:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:45.950 01:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.208 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.778 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.778 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.037 01:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.037 { 00:20:47.037 "cntlid": 97, 00:20:47.037 "qid": 0, 00:20:47.037 "state": "enabled", 00:20:47.037 "thread": "nvmf_tgt_poll_group_000", 00:20:47.037 "listen_address": { 00:20:47.037 "trtype": "TCP", 00:20:47.037 "adrfam": "IPv4", 00:20:47.037 "traddr": "10.0.0.2", 00:20:47.037 "trsvcid": "4420" 00:20:47.037 }, 00:20:47.037 "peer_address": { 00:20:47.037 "trtype": "TCP", 00:20:47.037 "adrfam": "IPv4", 00:20:47.037 "traddr": "10.0.0.1", 00:20:47.037 "trsvcid": "40124" 00:20:47.037 }, 00:20:47.037 "auth": { 00:20:47.037 "state": "completed", 00:20:47.037 "digest": "sha512", 00:20:47.037 "dhgroup": "null" 00:20:47.037 } 00:20:47.037 } 00:20:47.037 ]' 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.037 01:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.296 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:48.235 01:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.235 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.493 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.752 00:20:48.752 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.752 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.752 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.010 { 00:20:49.010 "cntlid": 99, 00:20:49.010 "qid": 0, 00:20:49.010 "state": "enabled", 00:20:49.010 "thread": "nvmf_tgt_poll_group_000", 00:20:49.010 "listen_address": { 00:20:49.010 "trtype": "TCP", 00:20:49.010 "adrfam": "IPv4", 00:20:49.010 
"traddr": "10.0.0.2", 00:20:49.010 "trsvcid": "4420" 00:20:49.010 }, 00:20:49.010 "peer_address": { 00:20:49.010 "trtype": "TCP", 00:20:49.010 "adrfam": "IPv4", 00:20:49.010 "traddr": "10.0.0.1", 00:20:49.010 "trsvcid": "40152" 00:20:49.010 }, 00:20:49.010 "auth": { 00:20:49.010 "state": "completed", 00:20:49.010 "digest": "sha512", 00:20:49.010 "dhgroup": "null" 00:20:49.010 } 00:20:49.010 } 00:20:49.010 ]' 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.010 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.268 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:49.268 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.268 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.268 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.268 01:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.526 01:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.466 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.724 01:59:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.724 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.982 00:20:50.982 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.982 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.982 01:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.240 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.240 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.240 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.240 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.240 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.240 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.240 { 00:20:51.240 "cntlid": 101, 00:20:51.240 "qid": 0, 00:20:51.241 "state": "enabled", 00:20:51.241 "thread": "nvmf_tgt_poll_group_000", 00:20:51.241 "listen_address": { 00:20:51.241 "trtype": "TCP", 00:20:51.241 "adrfam": "IPv4", 00:20:51.241 "traddr": "10.0.0.2", 00:20:51.241 "trsvcid": "4420" 00:20:51.241 }, 00:20:51.241 "peer_address": { 00:20:51.241 "trtype": "TCP", 00:20:51.241 "adrfam": "IPv4", 00:20:51.241 "traddr": "10.0.0.1", 00:20:51.241 "trsvcid": "40186" 00:20:51.241 }, 00:20:51.241 "auth": { 00:20:51.241 "state": "completed", 00:20:51.241 "digest": "sha512", 00:20:51.241 "dhgroup": "null" 
00:20:51.241 } 00:20:51.241 } 00:20:51.241 ]' 00:20:51.241 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.499 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.758 01:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:20:52.725 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.725 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.725 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.725 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.725 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.726 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.726 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.726 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.985 01:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.244 00:20:53.503 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.503 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.503 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.760 { 00:20:53.760 "cntlid": 103, 00:20:53.760 "qid": 0, 00:20:53.760 "state": "enabled", 00:20:53.760 "thread": "nvmf_tgt_poll_group_000", 00:20:53.760 "listen_address": { 00:20:53.760 "trtype": "TCP", 00:20:53.760 "adrfam": "IPv4", 00:20:53.760 "traddr": "10.0.0.2", 00:20:53.760 "trsvcid": "4420" 00:20:53.760 }, 00:20:53.760 "peer_address": { 00:20:53.760 "trtype": "TCP", 00:20:53.760 "adrfam": "IPv4", 00:20:53.760 "traddr": "10.0.0.1", 00:20:53.760 "trsvcid": "40204" 00:20:53.760 }, 00:20:53.760 "auth": { 00:20:53.760 "state": "completed", 00:20:53.760 "digest": "sha512", 00:20:53.760 "dhgroup": "null" 00:20:53.760 } 00:20:53.760 } 00:20:53.760 ]' 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.760 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:53.761 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.761 01:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:53.761 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.761 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.761 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.761 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.019 01:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:54.957 01:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.215 01:59:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.215 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.784 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.784 { 00:20:55.784 "cntlid": 105, 00:20:55.784 "qid": 0, 00:20:55.784 "state": "enabled", 00:20:55.784 "thread": "nvmf_tgt_poll_group_000", 00:20:55.784 "listen_address": { 00:20:55.784 "trtype": "TCP", 00:20:55.784 "adrfam": "IPv4", 00:20:55.784 "traddr": "10.0.0.2", 00:20:55.784 "trsvcid": "4420" 00:20:55.784 }, 00:20:55.784 "peer_address": { 00:20:55.784 "trtype": "TCP", 00:20:55.784 "adrfam": "IPv4", 00:20:55.784 "traddr": "10.0.0.1", 00:20:55.784 "trsvcid": "37518" 00:20:55.784 }, 00:20:55.784 "auth": { 00:20:55.784 "state": "completed", 00:20:55.784 "digest": "sha512", 00:20:55.784 "dhgroup": "ffdhe2048" 00:20:55.784 } 00:20:55.784 } 00:20:55.784 ]' 00:20:55.784 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.042 01:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.300 01:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.236 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:57.494 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:57.494 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.494 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.494 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:57.494 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.494 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.495 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.495 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.495 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.495 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:57.495 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.495 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.061 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.061 { 00:20:58.061 "cntlid": 107, 00:20:58.061 "qid": 0, 00:20:58.061 "state": "enabled", 00:20:58.061 "thread": "nvmf_tgt_poll_group_000", 00:20:58.061 "listen_address": { 00:20:58.061 "trtype": "TCP", 00:20:58.061 "adrfam": "IPv4", 00:20:58.061 "traddr": "10.0.0.2", 00:20:58.061 "trsvcid": "4420" 00:20:58.061 }, 00:20:58.061 "peer_address": { 00:20:58.061 "trtype": "TCP", 00:20:58.061 "adrfam": "IPv4", 00:20:58.061 "traddr": "10.0.0.1", 00:20:58.061 "trsvcid": "37550" 00:20:58.061 }, 00:20:58.061 "auth": { 00:20:58.061 "state": "completed", 00:20:58.061 "digest": "sha512", 00:20:58.061 "dhgroup": "ffdhe2048" 00:20:58.061 } 00:20:58.061 } 00:20:58.061 ]' 00:20:58.061 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.319 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.319 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.319 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:58.319 01:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.319 01:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.319 01:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.319 01:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.577 01:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.514 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:59.773 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.031 00:21:00.031 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.031 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.031 01:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.289 { 00:21:00.289 "cntlid": 109, 00:21:00.289 "qid": 0, 00:21:00.289 "state": "enabled", 00:21:00.289 "thread": "nvmf_tgt_poll_group_000", 00:21:00.289 "listen_address": { 00:21:00.289 "trtype": "TCP", 00:21:00.289 "adrfam": "IPv4", 00:21:00.289 "traddr": "10.0.0.2", 00:21:00.289 "trsvcid": "4420" 00:21:00.289 }, 00:21:00.289 "peer_address": { 00:21:00.289 "trtype": "TCP", 00:21:00.289 "adrfam": "IPv4", 00:21:00.289 "traddr": "10.0.0.1", 00:21:00.289 "trsvcid": "37574" 00:21:00.289 }, 00:21:00.289 "auth": { 00:21:00.289 "state": "completed", 00:21:00.289 "digest": "sha512", 00:21:00.289 "dhgroup": "ffdhe2048" 00:21:00.289 } 00:21:00.289 } 00:21:00.289 ]' 00:21:00.289 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.547 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.805 01:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:01.742 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.000 01:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.258 00:21:02.258 01:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.258 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.258 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.516 { 00:21:02.516 "cntlid": 111, 00:21:02.516 "qid": 0, 00:21:02.516 "state": "enabled", 00:21:02.516 "thread": "nvmf_tgt_poll_group_000", 00:21:02.516 "listen_address": { 00:21:02.516 "trtype": "TCP", 00:21:02.516 "adrfam": "IPv4", 00:21:02.516 "traddr": "10.0.0.2", 00:21:02.516 "trsvcid": "4420" 00:21:02.516 }, 00:21:02.516 "peer_address": { 00:21:02.516 "trtype": "TCP", 00:21:02.516 "adrfam": "IPv4", 00:21:02.516 "traddr": "10.0.0.1", 00:21:02.516 "trsvcid": "37606" 00:21:02.516 }, 00:21:02.516 "auth": { 00:21:02.516 "state": "completed", 00:21:02.516 "digest": "sha512", 00:21:02.516 "dhgroup": "ffdhe2048" 00:21:02.516 } 00:21:02.516 } 00:21:02.516 ]' 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.516 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:02.774 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.774 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:02.774 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.774 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.774 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.033 01:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:21:03.969 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.970 01:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:03.970 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.228 01:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.486 00:21:04.486 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.486 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.486 01:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.744 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.745 { 00:21:04.745 "cntlid": 113, 00:21:04.745 "qid": 0, 00:21:04.745 "state": "enabled", 00:21:04.745 "thread": "nvmf_tgt_poll_group_000", 00:21:04.745 "listen_address": { 00:21:04.745 "trtype": "TCP", 00:21:04.745 "adrfam": "IPv4", 00:21:04.745 "traddr": "10.0.0.2", 00:21:04.745 "trsvcid": "4420" 00:21:04.745 }, 00:21:04.745 "peer_address": { 00:21:04.745 "trtype": "TCP", 00:21:04.745 "adrfam": "IPv4", 00:21:04.745 "traddr": "10.0.0.1", 00:21:04.745 "trsvcid": "37628" 00:21:04.745 }, 00:21:04.745 "auth": { 00:21:04.745 "state": "completed", 00:21:04.745 "digest": "sha512", 00:21:04.745 "dhgroup": "ffdhe3072" 00:21:04.745 } 00:21:04.745 } 00:21:04.745 ]' 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.745 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.004 01:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:21:05.939 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.198 01:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.457 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.715 00:21:06.715 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:06.715 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:06.715 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:06.973 { 00:21:06.973 "cntlid": 115, 00:21:06.973 "qid": 0, 00:21:06.973 "state": "enabled", 00:21:06.973 "thread": "nvmf_tgt_poll_group_000", 00:21:06.973 "listen_address": { 00:21:06.973 "trtype": "TCP", 00:21:06.973 "adrfam": "IPv4", 00:21:06.973 "traddr": "10.0.0.2", 00:21:06.973 "trsvcid": "4420" 00:21:06.973 }, 00:21:06.973 "peer_address": { 00:21:06.973 "trtype": "TCP", 00:21:06.973 "adrfam": "IPv4", 00:21:06.973 "traddr": "10.0.0.1", 00:21:06.973 "trsvcid": "50352" 00:21:06.973 }, 00:21:06.973 "auth": { 00:21:06.973 "state": "completed", 00:21:06.973 "digest": "sha512", 00:21:06.973 "dhgroup": "ffdhe3072" 00:21:06.973 } 00:21:06.973 } 00:21:06.973 ]' 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.973 01:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.233 01:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.616 01:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.616 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.930 00:21:08.930 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.930 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.930 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.187 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.187 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.187 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.187 01:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.187 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.187 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:09.187 { 00:21:09.187 "cntlid": 117, 00:21:09.187 "qid": 0, 00:21:09.187 "state": "enabled", 00:21:09.187 "thread": "nvmf_tgt_poll_group_000", 00:21:09.187 "listen_address": { 00:21:09.187 "trtype": "TCP", 00:21:09.187 "adrfam": "IPv4", 00:21:09.187 "traddr": "10.0.0.2", 00:21:09.187 "trsvcid": "4420" 00:21:09.187 }, 00:21:09.187 "peer_address": { 00:21:09.187 "trtype": "TCP", 00:21:09.187 "adrfam": "IPv4", 00:21:09.187 "traddr": "10.0.0.1", 00:21:09.187 "trsvcid": "50374" 00:21:09.187 }, 00:21:09.187 "auth": { 00:21:09.187 "state": "completed", 00:21:09.187 "digest": "sha512", 00:21:09.187 "dhgroup": "ffdhe3072" 00:21:09.187 } 00:21:09.187 } 00:21:09.187 ]' 00:21:09.187 01:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:09.187 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.187 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:09.187 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:09.187 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:09.445 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.445 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.445 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.701 01:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:21:10.637 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:10.896 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.154 00:21:11.154 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:11.154 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.154 01:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:11.413 { 00:21:11.413 "cntlid": 119, 00:21:11.413 "qid": 0, 00:21:11.413 "state": "enabled", 00:21:11.413 "thread": 
"nvmf_tgt_poll_group_000", 00:21:11.413 "listen_address": { 00:21:11.413 "trtype": "TCP", 00:21:11.413 "adrfam": "IPv4", 00:21:11.413 "traddr": "10.0.0.2", 00:21:11.413 "trsvcid": "4420" 00:21:11.413 }, 00:21:11.413 "peer_address": { 00:21:11.413 "trtype": "TCP", 00:21:11.413 "adrfam": "IPv4", 00:21:11.413 "traddr": "10.0.0.1", 00:21:11.413 "trsvcid": "50402" 00:21:11.413 }, 00:21:11.413 "auth": { 00:21:11.413 "state": "completed", 00:21:11.413 "digest": "sha512", 00:21:11.413 "dhgroup": "ffdhe3072" 00:21:11.413 } 00:21:11.413 } 00:21:11.413 ]' 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.413 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.673 01:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:21:12.611 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:12.869 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.127 01:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.385 00:21:13.385 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.385 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.385 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.642 { 00:21:13.642 "cntlid": 121, 00:21:13.642 "qid": 0, 00:21:13.642 "state": "enabled", 00:21:13.642 "thread": "nvmf_tgt_poll_group_000", 00:21:13.642 "listen_address": { 00:21:13.642 "trtype": "TCP", 00:21:13.642 "adrfam": "IPv4", 00:21:13.642 "traddr": "10.0.0.2", 00:21:13.642 "trsvcid": "4420" 00:21:13.642 }, 00:21:13.642 "peer_address": { 00:21:13.642 "trtype": "TCP", 00:21:13.642 "adrfam": 
"IPv4", 00:21:13.642 "traddr": "10.0.0.1", 00:21:13.642 "trsvcid": "50424" 00:21:13.642 }, 00:21:13.642 "auth": { 00:21:13.642 "state": "completed", 00:21:13.642 "digest": "sha512", 00:21:13.642 "dhgroup": "ffdhe4096" 00:21:13.642 } 00:21:13.642 } 00:21:13.642 ]' 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.642 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.899 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.899 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.899 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.159 01:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.093 01:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.350 
01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.350 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.608 00:21:15.608 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.608 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.608 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.866 { 00:21:15.866 "cntlid": 123, 00:21:15.866 "qid": 0, 00:21:15.866 "state": "enabled", 00:21:15.866 "thread": "nvmf_tgt_poll_group_000", 00:21:15.866 "listen_address": { 00:21:15.866 "trtype": "TCP", 00:21:15.866 "adrfam": "IPv4", 00:21:15.866 "traddr": "10.0.0.2", 00:21:15.866 "trsvcid": "4420" 00:21:15.866 }, 00:21:15.866 "peer_address": { 00:21:15.866 "trtype": "TCP", 00:21:15.866 "adrfam": "IPv4", 00:21:15.866 "traddr": "10.0.0.1", 00:21:15.866 "trsvcid": "36020" 00:21:15.866 }, 00:21:15.866 "auth": { 00:21:15.866 "state": "completed", 00:21:15.866 "digest": "sha512", 00:21:15.866 "dhgroup": "ffdhe4096" 00:21:15.866 } 00:21:15.866 } 00:21:15.866 ]' 00:21:15.866 01:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.866 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.125 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.125 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.125 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.125 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.125 01:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.383 01:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:21:17.318 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.318 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.318 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.318 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.319 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.319 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.319 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.576 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.577 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.577 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.834 00:21:17.834 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.834 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.834 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.093 { 00:21:18.093 "cntlid": 125, 00:21:18.093 "qid": 0, 00:21:18.093 "state": "enabled", 00:21:18.093 "thread": "nvmf_tgt_poll_group_000", 00:21:18.093 "listen_address": { 00:21:18.093 "trtype": "TCP", 00:21:18.093 "adrfam": "IPv4", 00:21:18.093 "traddr": "10.0.0.2", 00:21:18.093 "trsvcid": "4420" 00:21:18.093 }, 00:21:18.093 "peer_address": { 00:21:18.093 "trtype": "TCP", 00:21:18.093 "adrfam": "IPv4", 00:21:18.093 "traddr": "10.0.0.1", 00:21:18.093 "trsvcid": "36046" 00:21:18.093 }, 00:21:18.093 "auth": { 00:21:18.093 "state": "completed", 00:21:18.093 "digest": "sha512", 00:21:18.093 "dhgroup": "ffdhe4096" 00:21:18.093 } 00:21:18.093 } 00:21:18.093 ]' 00:21:18.093 01:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.351 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.351 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.351 
01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.351 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.351 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.351 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.351 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.609 01:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.544 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.801 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:19.802 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:20.059 00:21:20.059 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.059 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.059 01:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.317 { 00:21:20.317 "cntlid": 127, 00:21:20.317 "qid": 0, 00:21:20.317 "state": "enabled", 00:21:20.317 "thread": "nvmf_tgt_poll_group_000", 00:21:20.317 "listen_address": { 00:21:20.317 "trtype": "TCP", 00:21:20.317 "adrfam": "IPv4", 00:21:20.317 "traddr": "10.0.0.2", 00:21:20.317 "trsvcid": "4420" 00:21:20.317 }, 00:21:20.317 "peer_address": { 00:21:20.317 "trtype": "TCP", 00:21:20.317 "adrfam": "IPv4", 00:21:20.317 "traddr": "10.0.0.1", 00:21:20.317 "trsvcid": "36078" 00:21:20.317 }, 00:21:20.317 "auth": { 00:21:20.317 "state": "completed", 00:21:20.317 "digest": "sha512", 00:21:20.317 "dhgroup": "ffdhe4096" 00:21:20.317 } 00:21:20.317 } 00:21:20.317 ]' 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.317 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.575 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.575 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.575 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.575 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.575 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.834 01:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:21:21.772 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.772 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.772 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.773 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.773 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.773 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.773 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.773 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:21.773 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:22.030 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:22.030 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.030 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.030 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:22.030 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:22.030 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.031 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.031 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.031 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.031 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.031 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.031 01:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.597 00:21:22.597 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.597 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.597 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.854 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.854 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.854 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.854 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.854 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.854 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.854 { 00:21:22.854 "cntlid": 129, 00:21:22.854 "qid": 0, 00:21:22.854 "state": "enabled", 00:21:22.854 "thread": "nvmf_tgt_poll_group_000", 00:21:22.854 "listen_address": { 00:21:22.854 "trtype": "TCP", 00:21:22.854 "adrfam": "IPv4", 00:21:22.854 "traddr": "10.0.0.2", 00:21:22.854 "trsvcid": "4420" 00:21:22.854 }, 00:21:22.855 "peer_address": { 00:21:22.855 "trtype": "TCP", 00:21:22.855 "adrfam": "IPv4", 00:21:22.855 "traddr": "10.0.0.1", 00:21:22.855 "trsvcid": "36112" 00:21:22.855 }, 00:21:22.855 "auth": { 00:21:22.855 "state": "completed", 00:21:22.855 "digest": "sha512", 00:21:22.855 "dhgroup": "ffdhe6144" 00:21:22.855 } 00:21:22.855 } 00:21:22.855 ]' 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.855 01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.112 
01:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.046 01:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.304 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.916 00:21:24.916 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.916 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.916 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.174 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.174 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.174 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.174 01:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.174 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.174 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.174 { 00:21:25.174 "cntlid": 131, 00:21:25.174 "qid": 0, 00:21:25.174 "state": "enabled", 00:21:25.174 "thread": "nvmf_tgt_poll_group_000", 00:21:25.174 "listen_address": { 00:21:25.174 "trtype": "TCP", 00:21:25.174 "adrfam": "IPv4", 00:21:25.174 "traddr": "10.0.0.2", 00:21:25.174 "trsvcid": "4420" 00:21:25.174 }, 00:21:25.174 "peer_address": { 00:21:25.174 "trtype": "TCP", 00:21:25.174 "adrfam": "IPv4", 00:21:25.174 "traddr": "10.0.0.1", 00:21:25.174 "trsvcid": "36148" 00:21:25.174 }, 00:21:25.174 "auth": { 00:21:25.174 "state": "completed", 00:21:25.174 "digest": "sha512", 00:21:25.174 "dhgroup": "ffdhe6144" 00:21:25.174 } 00:21:25.174 } 00:21:25.174 ]' 00:21:25.174 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.174 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.174 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.432 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.432 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.432 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.432 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.432 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.690 01:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.628 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.887 01:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:27.453 
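For reference, the cycle this log repeats for every digest/DH-group/key-id combination can be condensed into the shell sketch below. It only restates commands already visible in the trace (SPDK rpc.py against the host socket /var/tmp/host.sock, the target-side rpc_cmd wrapper, and nvme-cli); the key id, DHHC-1 secrets, and the assumption that rpc_cmd talks to the target app's default RPC socket are illustrative placeholders, not the literal values or plumbing of target/auth.sh.

# Condensed per-iteration auth flow (sketch, placeholder values)
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # assumed default target socket
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# 1. Restrict the SPDK host (initiator) to one digest and one DH group
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# 2. Allow the host on the subsystem with the DH-HMAC-CHAP key under test
#    (the --dhchap-ctrlr-key argument is present only when a ckeyN exists for that key id)
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach through the SPDK host path, check the authenticated qpair, detach
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                       # expect nvme0
$TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'       # expect "completed"
$HOSTRPC bdev_nvme_detach_controller nvme0

# 4. Repeat the handshake with the kernel host via nvme-cli, then tear down
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "DHHC-1:01:<host key, placeholder>:" \
    --dhchap-ctrl-secret "DHHC-1:02:<controller key, placeholder>:"
nvme disconnect -n "$SUBNQN"
$TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"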
00:21:27.453 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.453 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.453 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.711 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.711 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.711 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.711 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.711 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.711 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.711 { 00:21:27.711 "cntlid": 133, 00:21:27.711 "qid": 0, 00:21:27.711 "state": "enabled", 00:21:27.711 "thread": "nvmf_tgt_poll_group_000", 00:21:27.711 "listen_address": { 00:21:27.711 "trtype": "TCP", 00:21:27.711 "adrfam": "IPv4", 00:21:27.711 "traddr": "10.0.0.2", 00:21:27.711 "trsvcid": "4420" 00:21:27.711 }, 00:21:27.711 "peer_address": { 00:21:27.711 "trtype": "TCP", 00:21:27.711 "adrfam": "IPv4", 00:21:27.711 "traddr": "10.0.0.1", 00:21:27.711 "trsvcid": "49308" 00:21:27.711 }, 00:21:27.711 "auth": { 00:21:27.711 "state": "completed", 00:21:27.711 "digest": "sha512", 00:21:27.711 "dhgroup": "ffdhe6144" 00:21:27.711 } 00:21:27.711 } 00:21:27.711 ]' 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.712 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.971 01:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.906 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.906 01:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.165 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.423 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.423 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.423 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.991 00:21:29.991 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.991 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.991 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:30.249 { 00:21:30.249 "cntlid": 135, 00:21:30.249 "qid": 0, 00:21:30.249 "state": "enabled", 00:21:30.249 "thread": "nvmf_tgt_poll_group_000", 00:21:30.249 "listen_address": { 00:21:30.249 "trtype": "TCP", 00:21:30.249 "adrfam": "IPv4", 00:21:30.249 "traddr": "10.0.0.2", 00:21:30.249 "trsvcid": "4420" 00:21:30.249 }, 00:21:30.249 "peer_address": { 00:21:30.249 "trtype": "TCP", 00:21:30.249 "adrfam": "IPv4", 00:21:30.249 "traddr": "10.0.0.1", 00:21:30.249 "trsvcid": "49338" 00:21:30.249 }, 00:21:30.249 "auth": { 00:21:30.249 "state": "completed", 00:21:30.249 "digest": "sha512", 00:21:30.249 "dhgroup": "ffdhe6144" 00:21:30.249 } 00:21:30.249 } 00:21:30.249 ]' 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:30.249 01:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:30.249 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.249 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.249 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.506 01:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.438 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.696 01:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.628 00:21:32.628 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.628 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.628 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.886 { 00:21:32.886 "cntlid": 137, 00:21:32.886 "qid": 0, 00:21:32.886 "state": "enabled", 00:21:32.886 "thread": "nvmf_tgt_poll_group_000", 00:21:32.886 "listen_address": { 00:21:32.886 "trtype": "TCP", 00:21:32.886 "adrfam": "IPv4", 00:21:32.886 "traddr": "10.0.0.2", 00:21:32.886 "trsvcid": "4420" 00:21:32.886 }, 00:21:32.886 "peer_address": { 00:21:32.886 "trtype": "TCP", 00:21:32.886 "adrfam": "IPv4", 00:21:32.886 "traddr": "10.0.0.1", 00:21:32.886 "trsvcid": "49354" 00:21:32.886 }, 00:21:32.886 "auth": { 00:21:32.886 "state": "completed", 00:21:32.886 "digest": "sha512", 00:21:32.886 "dhgroup": "ffdhe8192" 00:21:32.886 } 00:21:32.886 } 00:21:32.886 ]' 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.886 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.143 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.143 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.143 01:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.400 01:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.333 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.590 01:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.523 00:21:35.523 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.523 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.523 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.780 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.780 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.780 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.780 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.780 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.780 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.780 { 00:21:35.780 "cntlid": 139, 00:21:35.780 "qid": 0, 00:21:35.780 "state": "enabled", 00:21:35.780 "thread": "nvmf_tgt_poll_group_000", 00:21:35.780 "listen_address": { 00:21:35.780 "trtype": "TCP", 00:21:35.780 "adrfam": "IPv4", 00:21:35.780 "traddr": "10.0.0.2", 00:21:35.780 "trsvcid": "4420" 00:21:35.780 }, 00:21:35.780 "peer_address": { 00:21:35.781 "trtype": "TCP", 00:21:35.781 "adrfam": "IPv4", 00:21:35.781 "traddr": "10.0.0.1", 00:21:35.781 "trsvcid": "49394" 00:21:35.781 }, 00:21:35.781 "auth": { 00:21:35.781 "state": "completed", 00:21:35.781 "digest": "sha512", 00:21:35.781 "dhgroup": "ffdhe8192" 00:21:35.781 } 00:21:35.781 } 00:21:35.781 ]' 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.781 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.038 01:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZTgyYjdkYjY5MmFhNDBlZWMyMzRlMDI1ZWI1OTc5OTIkog7U: --dhchap-ctrl-secret DHHC-1:02:ZDJmY2M5MzU1YmYxNGM2ZTRhZWU1OTQwODExMzhjNmJkY2NlNmFlMmQ2YTViODdm1yPkNQ==: 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.971 01:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.229 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.162 00:21:38.162 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.162 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.162 01:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.162 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.420 { 00:21:38.420 "cntlid": 141, 00:21:38.420 "qid": 0, 00:21:38.420 "state": "enabled", 00:21:38.420 "thread": "nvmf_tgt_poll_group_000", 00:21:38.420 "listen_address": 
{ 00:21:38.420 "trtype": "TCP", 00:21:38.420 "adrfam": "IPv4", 00:21:38.420 "traddr": "10.0.0.2", 00:21:38.420 "trsvcid": "4420" 00:21:38.420 }, 00:21:38.420 "peer_address": { 00:21:38.420 "trtype": "TCP", 00:21:38.420 "adrfam": "IPv4", 00:21:38.420 "traddr": "10.0.0.1", 00:21:38.420 "trsvcid": "39252" 00:21:38.420 }, 00:21:38.420 "auth": { 00:21:38.420 "state": "completed", 00:21:38.420 "digest": "sha512", 00:21:38.420 "dhgroup": "ffdhe8192" 00:21:38.420 } 00:21:38.420 } 00:21:38.420 ]' 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.420 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.681 01:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MDJhZTg1NjhkMzE3OGFjNjc4NmYwNWE0MjVjMWNjY2M0OTI5M2ZiNWE4YWUxMjA3f8fA1A==: --dhchap-ctrl-secret DHHC-1:01:YjZjODE5NjY0NWZmMGQ1OTAzZjc1MDQxYmM3M2NhNDPOAk4W: 00:21:39.614 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.614 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.614 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.614 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.614 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.614 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.615 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.615 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.873 01:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.807 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.807 { 00:21:40.807 "cntlid": 143, 00:21:40.807 "qid": 0, 00:21:40.807 "state": "enabled", 00:21:40.807 "thread": "nvmf_tgt_poll_group_000", 00:21:40.807 "listen_address": { 00:21:40.807 "trtype": "TCP", 00:21:40.807 "adrfam": "IPv4", 00:21:40.807 "traddr": "10.0.0.2", 00:21:40.807 "trsvcid": "4420" 00:21:40.807 }, 00:21:40.807 "peer_address": { 00:21:40.807 "trtype": "TCP", 00:21:40.807 "adrfam": "IPv4", 00:21:40.807 "traddr": "10.0.0.1", 00:21:40.807 "trsvcid": "39290" 00:21:40.807 }, 00:21:40.807 "auth": { 00:21:40.807 "state": "completed", 00:21:40.807 "digest": "sha512", 00:21:40.807 "dhgroup": 
"ffdhe8192" 00:21:40.807 } 00:21:40.807 } 00:21:40.807 ]' 00:21:40.807 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.065 01:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.323 01:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.304 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.562 01:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.498 00:21:43.498 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.498 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.498 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.756 { 00:21:43.756 "cntlid": 145, 00:21:43.756 "qid": 0, 00:21:43.756 "state": "enabled", 00:21:43.756 "thread": "nvmf_tgt_poll_group_000", 00:21:43.756 "listen_address": { 00:21:43.756 "trtype": "TCP", 00:21:43.756 "adrfam": "IPv4", 00:21:43.756 "traddr": "10.0.0.2", 00:21:43.756 "trsvcid": "4420" 00:21:43.756 }, 00:21:43.756 "peer_address": { 00:21:43.756 "trtype": "TCP", 00:21:43.756 "adrfam": "IPv4", 00:21:43.756 "traddr": "10.0.0.1", 00:21:43.756 "trsvcid": "39316" 00:21:43.756 }, 00:21:43.756 "auth": { 00:21:43.756 
"state": "completed", 00:21:43.756 "digest": "sha512", 00:21:43.756 "dhgroup": "ffdhe8192" 00:21:43.756 } 00:21:43.756 } 00:21:43.756 ]' 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.756 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.014 01:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:NDZhYzlmN2Q1MzQ2ZWMzOTQxYWY3ODdkNmYzMzllODc4MGZlYmFiMzE3YzlkMWZhDWmyZA==: --dhchap-ctrl-secret DHHC-1:03:OGM2Y2ZmN2IzZjA0ZGU2NjBhOWU1NmFhMTJkMjdlOGUwM2VjOWIyOGUyZDY4ZjViZDg4YjRkMDAyMTM1ZGQ4YaApqQU=: 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:44.946 01:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:44.946 01:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:45.880 request: 00:21:45.880 { 00:21:45.880 "name": "nvme0", 00:21:45.880 "trtype": "tcp", 00:21:45.880 "traddr": "10.0.0.2", 00:21:45.880 "adrfam": "ipv4", 00:21:45.880 "trsvcid": "4420", 00:21:45.880 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:45.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.880 "prchk_reftag": false, 00:21:45.880 "prchk_guard": false, 00:21:45.880 "hdgst": false, 00:21:45.880 "ddgst": false, 00:21:45.880 "dhchap_key": "key2", 00:21:45.880 "method": "bdev_nvme_attach_controller", 00:21:45.880 "req_id": 1 00:21:45.880 } 00:21:45.880 Got JSON-RPC error response 00:21:45.880 response: 00:21:45.880 { 00:21:45.880 "code": -5, 00:21:45.880 "message": "Input/output error" 00:21:45.880 } 00:21:45.880 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:45.880 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:45.880 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:45.880 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:45.880 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:45.880 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:45.881 
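The JSON-RPC error responses above and below are the point of this part of the test: the host offers a DH-HMAC-CHAP key (or controller key) that the target was not configured with, the handshake fails, and bdev_nvme_attach_controller returns a JSON-RPC error (code -5, Input/output error), which the script's NOT wrapper turns into a pass. A rough sketch of that expectation, reusing the illustrative variables from the sketch further up:

  # attaching with key2 after the host was registered with key1 only must fail
  if "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
      echo "unexpected: authentication with an unconfigured key succeeded" >&2
      exit 1
  fi
  # the same check is then repeated with a mismatched controller key
  # (key1/ckey2) and with a controller key the target never learned
  # (key1/ckey1 after add_host was called with key1 only).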
02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:45.881 02:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:46.814 request: 00:21:46.814 { 00:21:46.814 "name": "nvme0", 00:21:46.814 "trtype": "tcp", 00:21:46.814 "traddr": "10.0.0.2", 00:21:46.814 "adrfam": "ipv4", 00:21:46.814 "trsvcid": "4420", 00:21:46.814 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:46.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:46.814 "prchk_reftag": false, 00:21:46.814 "prchk_guard": false, 00:21:46.814 "hdgst": false, 00:21:46.814 "ddgst": false, 00:21:46.814 "dhchap_key": "key1", 00:21:46.814 "dhchap_ctrlr_key": "ckey2", 00:21:46.814 "method": "bdev_nvme_attach_controller", 00:21:46.814 "req_id": 1 00:21:46.814 } 00:21:46.814 Got JSON-RPC error response 00:21:46.814 response: 00:21:46.814 { 00:21:46.814 "code": -5, 00:21:46.814 "message": "Input/output error" 00:21:46.814 } 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:46.814 02:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.814 02:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.745 request: 00:21:47.745 { 00:21:47.745 "name": "nvme0", 00:21:47.745 "trtype": "tcp", 00:21:47.745 "traddr": "10.0.0.2", 00:21:47.745 "adrfam": "ipv4", 00:21:47.745 "trsvcid": "4420", 00:21:47.745 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:47.745 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:47.745 "prchk_reftag": false, 00:21:47.745 "prchk_guard": false, 00:21:47.745 "hdgst": false, 00:21:47.745 "ddgst": false, 00:21:47.745 "dhchap_key": "key1", 00:21:47.745 "dhchap_ctrlr_key": "ckey1", 00:21:47.745 "method": "bdev_nvme_attach_controller", 00:21:47.745 "req_id": 1 00:21:47.745 } 00:21:47.745 Got JSON-RPC error response 00:21:47.745 response: 00:21:47.745 { 00:21:47.745 "code": -5, 00:21:47.745 "message": "Input/output error" 00:21:47.745 } 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1434104 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1434104 ']' 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1434104 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1434104 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1434104' 00:21:47.745 killing process with pid 1434104 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1434104 00:21:47.745 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1434104 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1456583 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1456583 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1456583 ']' 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.001 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1456583 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1456583 ']' 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
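At this point the auth test restarts nvmf_tgt with --wait-for-rpc, so the application pauses after its RPC server comes up and the waitforlisten helper polls /var/tmp/spdk.sock for the new pid before the rpc_cmd at target/auth.sh@143 completes initialization. A rough sketch of that readiness pattern, followed by the framework_start_init call a --wait-for-rpc target needs, might look like this (the polling loop is an illustration of the idea, not the helper's exact implementation):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # rpc_get_methods answers even before framework init, so it works as a liveness probe
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 0.5
    done
    # release the target from --wait-for-rpc mode once any pre-init RPCs have been issued
    "$rpc" -s /var/tmp/spdk.sock framework_start_init
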
00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.259 02:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:48.518 02:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.449 00:21:49.449 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.449 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.449 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.707 { 00:21:49.707 "cntlid": 1, 00:21:49.707 "qid": 0, 00:21:49.707 "state": "enabled", 00:21:49.707 "thread": "nvmf_tgt_poll_group_000", 00:21:49.707 "listen_address": { 00:21:49.707 "trtype": "TCP", 00:21:49.707 "adrfam": "IPv4", 00:21:49.707 "traddr": "10.0.0.2", 00:21:49.707 "trsvcid": "4420" 00:21:49.707 }, 00:21:49.707 "peer_address": { 00:21:49.707 "trtype": "TCP", 00:21:49.707 "adrfam": "IPv4", 00:21:49.707 "traddr": "10.0.0.1", 00:21:49.707 "trsvcid": "56350" 00:21:49.707 }, 00:21:49.707 "auth": { 00:21:49.707 "state": "completed", 00:21:49.707 "digest": "sha512", 00:21:49.707 "dhgroup": "ffdhe8192" 00:21:49.707 } 00:21:49.707 } 00:21:49.707 ]' 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.707 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.965 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:49.965 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.965 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.965 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.965 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.222 02:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:MTQ0NDgzYWUzNGMyYzBlZWM2MzkxZDdkZmEzODllOWI1N2FlYjA3OTM2ZDdjYmFlZTgxNjM3YWQ0NjA2YTg3Mhs9fsM=: 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:51.154 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.155 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.155 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.155 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:51.155 02:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.412 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:51.669 request: 00:21:51.669 { 00:21:51.669 "name": "nvme0", 00:21:51.669 "trtype": "tcp", 00:21:51.669 "traddr": "10.0.0.2", 00:21:51.669 "adrfam": "ipv4", 00:21:51.669 "trsvcid": "4420", 00:21:51.669 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.669 "prchk_reftag": false, 00:21:51.669 "prchk_guard": false, 00:21:51.669 "hdgst": false, 00:21:51.669 "ddgst": false, 00:21:51.669 "dhchap_key": "key3", 00:21:51.669 "method": "bdev_nvme_attach_controller", 00:21:51.669 "req_id": 1 00:21:51.669 } 00:21:51.669 Got JSON-RPC error response 00:21:51.669 response: 00:21:51.669 { 00:21:51.669 "code": -5, 00:21:51.669 "message": "Input/output error" 00:21:51.669 } 00:21:51.926 02:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:51.926 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.191 02:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:52.450 request: 00:21:52.450 { 00:21:52.450 "name": "nvme0", 00:21:52.450 "trtype": "tcp", 00:21:52.450 "traddr": "10.0.0.2", 00:21:52.450 "adrfam": "ipv4", 00:21:52.450 "trsvcid": "4420", 00:21:52.450 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.451 "prchk_reftag": false, 00:21:52.451 "prchk_guard": false, 00:21:52.451 "hdgst": false, 00:21:52.451 "ddgst": false, 00:21:52.451 "dhchap_key": "key3", 00:21:52.451 
"method": "bdev_nvme_attach_controller", 00:21:52.451 "req_id": 1 00:21:52.451 } 00:21:52.451 Got JSON-RPC error response 00:21:52.451 response: 00:21:52.451 { 00:21:52.451 "code": -5, 00:21:52.451 "message": "Input/output error" 00:21:52.451 } 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.451 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.708 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:52.966 request: 00:21:52.966 { 00:21:52.966 "name": "nvme0", 00:21:52.966 "trtype": "tcp", 00:21:52.966 "traddr": "10.0.0.2", 00:21:52.966 "adrfam": "ipv4", 00:21:52.966 "trsvcid": "4420", 00:21:52.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.966 "prchk_reftag": false, 00:21:52.966 "prchk_guard": false, 00:21:52.966 "hdgst": false, 00:21:52.966 "ddgst": false, 00:21:52.966 "dhchap_key": "key0", 00:21:52.966 "dhchap_ctrlr_key": "key1", 00:21:52.966 "method": "bdev_nvme_attach_controller", 00:21:52.966 "req_id": 1 00:21:52.966 } 00:21:52.966 Got JSON-RPC error response 00:21:52.966 response: 00:21:52.966 { 00:21:52.966 "code": -5, 00:21:52.966 "message": "Input/output error" 00:21:52.966 } 00:21:52.966 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:52.966 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.966 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.966 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.966 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:52.966 02:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:53.224 00:21:53.224 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:53.224 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
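The attach attempts above run under the test's NOT wrapper, which inverts the exit status so that an unexpected success fails the run; the expected outcome is the -5 Input/output error shown in the JSON-RPC response, since the host entry was re-added without a DH-HMAC-CHAP key. Stripped of the wrapper, the same assertion could be written roughly as follows (host NQN, addresses and the /var/tmp/host.sock RPC server are taken from this run):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # the subsystem's host entry no longer carries a DH-HMAC-CHAP key, so this attach must fail
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key key0 --dhchap-ctrlr-key key1; then
            echo "attach unexpectedly succeeded" >&2
            exit 1
    fi
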
00:21:53.224 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.480 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.480 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.480 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1434124 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1434124 ']' 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1434124 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1434124 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1434124' 00:21:53.737 killing process with pid 1434124 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1434124 00:21:53.737 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1434124 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:54.303 02:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:54.303 rmmod nvme_tcp 00:21:54.303 rmmod nvme_fabrics 00:21:54.303 rmmod nvme_keyring 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1456583 ']' 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1456583 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1456583 ']' 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1456583 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1456583 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1456583' 00:21:54.303 killing process with pid 1456583 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1456583 00:21:54.303 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1456583 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.561 02:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.464 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.464 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4EB /tmp/spdk.key-sha256.GJi /tmp/spdk.key-sha384.ZOe /tmp/spdk.key-sha512.Coj /tmp/spdk.key-sha512.2Qc /tmp/spdk.key-sha384.XJG /tmp/spdk.key-sha256.NQC '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:56.464 00:21:56.464 real 3m8.509s 00:21:56.464 user 7m18.850s 00:21:56.464 sys 0m25.065s 00:21:56.464 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.464 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.464 ************************************ 00:21:56.464 END TEST nvmf_auth_target 00:21:56.464 ************************************ 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:56.723 02:00:11 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:56.723 ************************************ 00:21:56.723 START TEST nvmf_bdevio_no_huge 00:21:56.723 ************************************ 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:56.723 * Looking for test storage... 00:21:56.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.723 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.724 02:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.724 02:00:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.628 02:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.628 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.628 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.628 02:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.629 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
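The scan above walks the known Intel/Mellanox device IDs and then maps each matching PCI function to its kernel net interface through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1 below. Reduced to its essence, that lookup is:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
            # every net interface bound to this PCI function appears under its sysfs node
            for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
                    [ -e "$netdev" ] && echo "Found net devices under $pci: ${netdev##*/}"
            done
    done
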
00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.629 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:58.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:21:58.629 00:21:58.629 --- 10.0.0.2 ping statistics --- 00:21:58.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.629 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:58.629 00:21:58.629 --- 10.0.0.1 ping statistics --- 00:21:58.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.629 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1459746 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1459746 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1459746 ']' 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
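The two pings close out nvmf_tcp_init: cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1) while cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), with an iptables rule opening the NVMe/TCP port. Condensed from the commands interleaved above, the plumbing amounts to roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
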
00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.629 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.629 [2024-07-24 02:00:13.494551] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:21:58.630 [2024-07-24 02:00:13.494651] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:58.888 [2024-07-24 02:00:13.564898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.888 [2024-07-24 02:00:13.648917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.888 [2024-07-24 02:00:13.648965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.888 [2024-07-24 02:00:13.648992] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.888 [2024-07-24 02:00:13.649003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.888 [2024-07-24 02:00:13.649012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.888 [2024-07-24 02:00:13.649062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:58.888 [2024-07-24 02:00:13.649176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:58.888 [2024-07-24 02:00:13.649216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:58.888 [2024-07-24 02:00:13.649218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.888 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:58.888 [2024-07-24 02:00:13.776222] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.155 02:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 Malloc0 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:59.155 [2024-07-24 02:00:13.815042] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:59.155 { 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme$subsystem", 00:21:59.155 "trtype": "$TEST_TRANSPORT", 00:21:59.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "$NVMF_PORT", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.155 "hdgst": ${hdgst:-false}, 00:21:59.155 "ddgst": ${ddgst:-false} 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 } 00:21:59.155 EOF 00:21:59.155 )") 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
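Stripped of the rpc_cmd wrapper and the xtrace noise, the target-side configuration this bdevio run exercises boils down to five RPCs. A condensed sketch with the same arguments seen in the trace (the suite issues these through rpc_cmd against the target started above, so socket and namespace handling is omitted here):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, options as used by the suite
    $rpc_py bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON document printed next is what gen_nvmf_target_json hands bdevio over /dev/fd/62: it tells the tool to attach to that 10.0.0.2:4420 listener as nqn.2016-06.io.spdk:host1.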
00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:59.155 02:00:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:59.155 "params": { 00:21:59.155 "name": "Nvme1", 00:21:59.155 "trtype": "tcp", 00:21:59.155 "traddr": "10.0.0.2", 00:21:59.155 "adrfam": "ipv4", 00:21:59.155 "trsvcid": "4420", 00:21:59.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.155 "hdgst": false, 00:21:59.155 "ddgst": false 00:21:59.155 }, 00:21:59.155 "method": "bdev_nvme_attach_controller" 00:21:59.155 }' 00:21:59.155 [2024-07-24 02:00:13.861079] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:21:59.155 [2024-07-24 02:00:13.861164] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1459886 ] 00:21:59.155 [2024-07-24 02:00:13.920048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:59.155 [2024-07-24 02:00:14.008062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.155 [2024-07-24 02:00:14.008115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.155 [2024-07-24 02:00:14.008118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.462 I/O targets: 00:21:59.462 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:59.462 00:21:59.462 00:21:59.462 CUnit - A unit testing framework for C - Version 2.1-3 00:21:59.462 http://cunit.sourceforge.net/ 00:21:59.462 00:21:59.462 00:21:59.462 Suite: bdevio tests on: Nvme1n1 00:21:59.462 Test: blockdev write read block ...passed 00:21:59.462 Test: blockdev write zeroes read block ...passed 00:21:59.462 Test: blockdev write zeroes read no split ...passed 00:21:59.462 Test: blockdev write zeroes read split ...passed 00:21:59.720 Test: blockdev write zeroes read split partial ...passed 00:21:59.720 Test: blockdev reset ...[2024-07-24 02:00:14.336925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:59.720 [2024-07-24 02:00:14.337036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123c4e0 (9): Bad file descriptor 00:21:59.720 [2024-07-24 02:00:14.348660] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:59.720 passed 00:21:59.720 Test: blockdev write read 8 blocks ...passed 00:21:59.720 Test: blockdev write read size > 128k ...passed 00:21:59.720 Test: blockdev write read invalid size ...passed 00:21:59.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:59.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:59.720 Test: blockdev write read max offset ...passed 00:21:59.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:59.720 Test: blockdev writev readv 8 blocks ...passed 00:21:59.720 Test: blockdev writev readv 30 x 1block ...passed 00:21:59.720 Test: blockdev writev readv block ...passed 00:21:59.720 Test: blockdev writev readv size > 128k ...passed 00:21:59.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:59.720 Test: blockdev comparev and writev ...[2024-07-24 02:00:14.565741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.720 [2024-07-24 02:00:14.565777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:59.720 [2024-07-24 02:00:14.565801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.720 [2024-07-24 02:00:14.565818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:59.720 [2024-07-24 02:00:14.566196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.720 [2024-07-24 02:00:14.566220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:59.720 [2024-07-24 02:00:14.566242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.720 [2024-07-24 02:00:14.566258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:59.720 [2024-07-24 02:00:14.566686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.720 [2024-07-24 02:00:14.566710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:59.721 [2024-07-24 02:00:14.566731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.721 [2024-07-24 02:00:14.566747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:59.721 [2024-07-24 02:00:14.567148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.721 [2024-07-24 02:00:14.567171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:59.721 [2024-07-24 02:00:14.567199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:59.721 [2024-07-24 02:00:14.567215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:59.721 passed 00:21:59.979 Test: blockdev nvme passthru rw ...passed 00:21:59.979 Test: blockdev nvme passthru vendor specific ...[2024-07-24 02:00:14.651634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.979 [2024-07-24 02:00:14.651661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:59.979 [2024-07-24 02:00:14.651838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.979 [2024-07-24 02:00:14.651861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:59.979 [2024-07-24 02:00:14.652034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.979 [2024-07-24 02:00:14.652057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:59.979 [2024-07-24 02:00:14.652232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:59.979 [2024-07-24 02:00:14.652255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:59.979 passed 00:21:59.979 Test: blockdev nvme admin passthru ...passed 00:21:59.979 Test: blockdev copy ...passed 00:21:59.979 00:21:59.979 Run Summary: Type Total Ran Passed Failed Inactive 00:21:59.979 suites 1 1 n/a 0 0 00:21:59.979 tests 23 23 23 0 0 00:21:59.979 asserts 152 152 152 0 n/a 00:21:59.979 00:21:59.979 Elapsed time = 1.080 seconds 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:00.237 rmmod nvme_tcp 00:22:00.237 rmmod nvme_fabrics 00:22:00.237 rmmod nvme_keyring 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1459746 ']' 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1459746 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1459746 ']' 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1459746 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1459746 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1459746' 00:22:00.237 killing process with pid 1459746 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1459746 00:22:00.237 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1459746 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.803 02:00:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:02.716 00:22:02.716 real 0m6.131s 00:22:02.716 user 0m9.492s 00:22:02.716 sys 0m2.384s 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:02.716 ************************************ 00:22:02.716 END TEST nvmf_bdevio_no_huge 00:22:02.716 ************************************ 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:02.716 ************************************ 00:22:02.716 START TEST nvmf_tls 00:22:02.716 ************************************ 00:22:02.716 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:02.975 * Looking for test storage... 00:22:02.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.975 02:00:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:04.877 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:04.877 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.877 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:04.878 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:04.878 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.878 02:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:22:04.878 00:22:04.878 --- 10.0.0.2 ping statistics --- 00:22:04.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.878 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
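The nvmf_tcp_init sequence above is the usual two-port split for these phy runs: cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), while cvl_0_0 is moved into cvl_0_0_ns_spdk and becomes the target side (10.0.0.2). Condensed from the trace, the plumbing is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port goes into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two pings are simply a sanity check that both directions work before the target application is started.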
00:22:04.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:22:04.878 00:22:04.878 --- 10.0.0.1 ping statistics --- 00:22:04.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.878 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1461954 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1461954 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1461954 ']' 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.878 02:00:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.137 [2024-07-24 02:00:19.803801] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:05.137 [2024-07-24 02:00:19.803884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.137 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.137 [2024-07-24 02:00:19.873740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.137 [2024-07-24 02:00:19.962390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.137 [2024-07-24 02:00:19.962443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.137 [2024-07-24 02:00:19.962472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.137 [2024-07-24 02:00:19.962483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.137 [2024-07-24 02:00:19.962493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.137 [2024-07-24 02:00:19.962526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.137 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.137 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:05.137 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:05.137 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.137 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:05.395 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.395 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:05.395 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:05.395 true 00:22:05.395 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:05.395 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:05.653 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:05.653 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:05.653 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:05.911 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:05.911 02:00:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:06.168 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:06.168 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:06.168 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:22:06.426 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:06.426 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:06.684 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:06.684 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:06.684 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:06.684 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:06.942 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:06.942 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:06.942 02:00:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:07.200 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:07.200 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:07.457 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:07.457 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:07.457 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:08.022 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:08.280 02:00:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.jEq9f4NgQZ 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.kRXE1l7L1B 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.jEq9f4NgQZ 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kRXE1l7L1B 00:22:08.280 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:08.537 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:08.795 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.jEq9f4NgQZ 00:22:08.795 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.jEq9f4NgQZ 00:22:08.795 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:09.052 [2024-07-24 02:00:23.907579] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.052 02:00:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:09.616 02:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:09.616 [2024-07-24 02:00:24.497139] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.616 [2024-07-24 02:00:24.497428] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.874 02:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:10.131 malloc0 00:22:10.131 02:00:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:10.390 02:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jEq9f4NgQZ 00:22:10.647 [2024-07-24 02:00:25.339507] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:10.647 02:00:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.jEq9f4NgQZ 00:22:10.647 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.615 Initializing NVMe Controllers 00:22:20.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:20.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:20.615 Initialization complete. Launching workers. 00:22:20.615 ======================================================== 00:22:20.615 Latency(us) 00:22:20.615 Device Information : IOPS MiB/s Average min max 00:22:20.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7811.15 30.51 8196.11 1182.57 9157.90 00:22:20.615 ======================================================== 00:22:20.615 Total : 7811.15 30.51 8196.11 1182.57 9157.90 00:22:20.615 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jEq9f4NgQZ 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jEq9f4NgQZ' 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1463848 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1463848 /var/tmp/bdevperf.sock 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1463848 ']' 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:20.615 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.873 [2024-07-24 02:00:35.514270] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:20.873 [2024-07-24 02:00:35.514358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463848 ] 00:22:20.873 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.873 [2024-07-24 02:00:35.570067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.873 [2024-07-24 02:00:35.652867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.873 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.873 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:20.873 02:00:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jEq9f4NgQZ 00:22:21.131 [2024-07-24 02:00:35.978951] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:21.131 [2024-07-24 02:00:35.979076] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:21.389 TLSTESTn1 00:22:21.389 02:00:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:21.389 Running I/O for 10 seconds... 
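The two NVMeTLSkey-1:01:... values generated above are pre-shared keys in the TLS interchange format: the fixed prefix, a two-digit hash identifier (01 here, from the digest argument), and base64 of the configured key bytes with a four-byte CRC32 appended. A rough stand-alone equivalent of what format_interchange_psk's embedded Python does (a sketch, not a copy of the helper; python3 and the little-endian CRC byte order are inferred from the output above):

    format_interchange_psk() {   # sketch: $1 = key string, $2 = digest id
        python3 -c '
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC32 appended little-endian
    print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    ' "$1" "$2"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # compare with the NVMeTLSkey-1:01:MDAx...JEiQ: value logged above

Both keys are then written to mktemp files and chmod 0600, since the target and initiator consume them as --psk key files rather than as literal strings.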
00:22:31.422 00:22:31.422 Latency(us) 00:22:31.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.422 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:31.422 Verification LBA range: start 0x0 length 0x2000 00:22:31.422 TLSTESTn1 : 10.02 3253.39 12.71 0.00 0.00 39276.03 6553.60 46020.84 00:22:31.422 =================================================================================================================== 00:22:31.422 Total : 3253.39 12.71 0.00 0.00 39276.03 6553.60 46020.84 00:22:31.422 0 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1463848 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1463848 ']' 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1463848 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463848 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463848' 00:22:31.422 killing process with pid 1463848 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1463848 00:22:31.422 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.422 00:22:31.422 Latency(us) 00:22:31.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.422 =================================================================================================================== 00:22:31.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.422 [2024-07-24 02:00:46.250661] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:31.422 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1463848 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kRXE1l7L1B 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kRXE1l7L1B 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kRXE1l7L1B 00:22:31.681 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kRXE1l7L1B' 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465044 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465044 /var/tmp/bdevperf.sock 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465044 ']' 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.682 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.682 [2024-07-24 02:00:46.526613] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:31.682 [2024-07-24 02:00:46.526702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465044 ] 00:22:31.682 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.940 [2024-07-24 02:00:46.593467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.940 [2024-07-24 02:00:46.682588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.940 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.940 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:31.940 02:00:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kRXE1l7L1B 00:22:32.199 [2024-07-24 02:00:47.011646] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.199 [2024-07-24 02:00:47.011778] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.199 [2024-07-24 02:00:47.019200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:32.199 [2024-07-24 02:00:47.019751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fdab0 (107): Transport endpoint is not connected 00:22:32.199 [2024-07-24 02:00:47.020741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17fdab0 (9): Bad file descriptor 00:22:32.199 [2024-07-24 02:00:47.021740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.199 [2024-07-24 02:00:47.021758] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:32.199 [2024-07-24 02:00:47.021788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:32.199 request: 00:22:32.199 { 00:22:32.199 "name": "TLSTEST", 00:22:32.199 "trtype": "tcp", 00:22:32.199 "traddr": "10.0.0.2", 00:22:32.199 "adrfam": "ipv4", 00:22:32.199 "trsvcid": "4420", 00:22:32.199 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.199 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:32.199 "prchk_reftag": false, 00:22:32.199 "prchk_guard": false, 00:22:32.199 "hdgst": false, 00:22:32.199 "ddgst": false, 00:22:32.199 "psk": "/tmp/tmp.kRXE1l7L1B", 00:22:32.199 "method": "bdev_nvme_attach_controller", 00:22:32.199 "req_id": 1 00:22:32.199 } 00:22:32.199 Got JSON-RPC error response 00:22:32.199 response: 00:22:32.199 { 00:22:32.199 "code": -5, 00:22:32.199 "message": "Input/output error" 00:22:32.199 } 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465044 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465044 ']' 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465044 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465044 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465044' 00:22:32.199 killing process with pid 1465044 00:22:32.199 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465044 00:22:32.199 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.199 00:22:32.199 Latency(us) 00:22:32.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.200 =================================================================================================================== 00:22:32.200 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.200 [2024-07-24 02:00:47.063766] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.200 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465044 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jEq9f4NgQZ 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jEq9f4NgQZ 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jEq9f4NgQZ 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jEq9f4NgQZ' 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465177 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465177 /var/tmp/bdevperf.sock 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465177 ']' 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:32.458 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.458 [2024-07-24 02:00:47.296125] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:32.458 [2024-07-24 02:00:47.296200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465177 ] 00:22:32.458 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.716 [2024-07-24 02:00:47.354455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.716 [2024-07-24 02:00:47.447083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.716 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.716 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:32.717 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.jEq9f4NgQZ 00:22:32.975 [2024-07-24 02:00:47.793509] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:32.975 [2024-07-24 02:00:47.793671] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:32.975 [2024-07-24 02:00:47.801108] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:32.975 [2024-07-24 02:00:47.801139] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:32.975 [2024-07-24 02:00:47.801193] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:32.975 [2024-07-24 02:00:47.801747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x740ab0 (107): Transport endpoint is not connected 00:22:32.975 [2024-07-24 02:00:47.802737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x740ab0 (9): Bad file descriptor 00:22:32.975 [2024-07-24 02:00:47.803735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:32.975 [2024-07-24 02:00:47.803753] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:32.975 [2024-07-24 02:00:47.803784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:32.975 request: 00:22:32.975 { 00:22:32.975 "name": "TLSTEST", 00:22:32.975 "trtype": "tcp", 00:22:32.975 "traddr": "10.0.0.2", 00:22:32.975 "adrfam": "ipv4", 00:22:32.975 "trsvcid": "4420", 00:22:32.976 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.976 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:32.976 "prchk_reftag": false, 00:22:32.976 "prchk_guard": false, 00:22:32.976 "hdgst": false, 00:22:32.976 "ddgst": false, 00:22:32.976 "psk": "/tmp/tmp.jEq9f4NgQZ", 00:22:32.976 "method": "bdev_nvme_attach_controller", 00:22:32.976 "req_id": 1 00:22:32.976 } 00:22:32.976 Got JSON-RPC error response 00:22:32.976 response: 00:22:32.976 { 00:22:32.976 "code": -5, 00:22:32.976 "message": "Input/output error" 00:22:32.976 } 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465177 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465177 ']' 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465177 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465177 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465177' 00:22:32.976 killing process with pid 1465177 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465177 00:22:32.976 Received shutdown signal, test time was about 10.000000 seconds 00:22:32.976 00:22:32.976 Latency(us) 00:22:32.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.976 =================================================================================================================== 00:22:32.976 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:32.976 [2024-07-24 02:00:47.852577] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:32.976 02:00:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465177 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jEq9f4NgQZ 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jEq9f4NgQZ 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jEq9f4NgQZ 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.jEq9f4NgQZ' 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465312 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465312 /var/tmp/bdevperf.sock 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465312 ']' 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:33.235 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.235 [2024-07-24 02:00:48.113018] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:33.235 [2024-07-24 02:00:48.113094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465312 ] 00:22:33.493 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.493 [2024-07-24 02:00:48.172238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.493 [2024-07-24 02:00:48.264162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.493 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:33.493 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:33.493 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jEq9f4NgQZ 00:22:33.752 [2024-07-24 02:00:48.647111] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:33.752 [2024-07-24 02:00:48.647267] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:34.010 [2024-07-24 02:00:48.653403] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:34.010 [2024-07-24 02:00:48.653438] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:34.011 [2024-07-24 02:00:48.653480] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:34.011 [2024-07-24 02:00:48.654494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2238ab0 (107): Transport endpoint is not connected 00:22:34.011 [2024-07-24 02:00:48.655484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2238ab0 (9): Bad file descriptor 00:22:34.011 [2024-07-24 02:00:48.656483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:34.011 [2024-07-24 02:00:48.656513] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:34.011 [2024-07-24 02:00:48.656532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:34.011 request: 00:22:34.011 { 00:22:34.011 "name": "TLSTEST", 00:22:34.011 "trtype": "tcp", 00:22:34.011 "traddr": "10.0.0.2", 00:22:34.011 "adrfam": "ipv4", 00:22:34.011 "trsvcid": "4420", 00:22:34.011 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:34.011 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.011 "prchk_reftag": false, 00:22:34.011 "prchk_guard": false, 00:22:34.011 "hdgst": false, 00:22:34.011 "ddgst": false, 00:22:34.011 "psk": "/tmp/tmp.jEq9f4NgQZ", 00:22:34.011 "method": "bdev_nvme_attach_controller", 00:22:34.011 "req_id": 1 00:22:34.011 } 00:22:34.011 Got JSON-RPC error response 00:22:34.011 response: 00:22:34.011 { 00:22:34.011 "code": -5, 00:22:34.011 "message": "Input/output error" 00:22:34.011 } 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465312 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465312 ']' 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465312 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465312 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465312' 00:22:34.011 killing process with pid 1465312 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465312 00:22:34.011 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.011 00:22:34.011 Latency(us) 00:22:34.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.011 =================================================================================================================== 00:22:34.011 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.011 [2024-07-24 02:00:48.707798] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:34.011 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465312 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465428 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465428 /var/tmp/bdevperf.sock 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465428 ']' 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:34.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.270 02:00:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.270 [2024-07-24 02:00:48.966335] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:34.270 [2024-07-24 02:00:48.966427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465428 ] 00:22:34.270 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.270 [2024-07-24 02:00:49.024367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.270 [2024-07-24 02:00:49.110066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.528 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.528 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:34.528 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:34.787 [2024-07-24 02:00:49.445388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:34.787 [2024-07-24 02:00:49.447334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9e60 (9): Bad file descriptor 00:22:34.787 [2024-07-24 02:00:49.448330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:34.787 [2024-07-24 02:00:49.448349] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:34.787 [2024-07-24 02:00:49.448379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:34.787 request: 00:22:34.787 { 00:22:34.787 "name": "TLSTEST", 00:22:34.787 "trtype": "tcp", 00:22:34.787 "traddr": "10.0.0.2", 00:22:34.787 "adrfam": "ipv4", 00:22:34.787 "trsvcid": "4420", 00:22:34.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:34.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:34.787 "prchk_reftag": false, 00:22:34.787 "prchk_guard": false, 00:22:34.787 "hdgst": false, 00:22:34.787 "ddgst": false, 00:22:34.787 "method": "bdev_nvme_attach_controller", 00:22:34.787 "req_id": 1 00:22:34.787 } 00:22:34.787 Got JSON-RPC error response 00:22:34.787 response: 00:22:34.787 { 00:22:34.787 "code": -5, 00:22:34.787 "message": "Input/output error" 00:22:34.787 } 00:22:34.787 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465428 00:22:34.787 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465428 ']' 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465428 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465428 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465428' 00:22:34.788 killing process with pid 1465428 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465428 00:22:34.788 Received shutdown signal, test time was about 10.000000 seconds 00:22:34.788 00:22:34.788 Latency(us) 00:22:34.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:34.788 =================================================================================================================== 00:22:34.788 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:34.788 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465428 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1461954 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1461954 ']' 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1461954 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1461954 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1461954' 00:22:35.046 killing process with pid 1461954 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1461954 00:22:35.046 [2024-07-24 02:00:49.743532] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:35.046 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1461954 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:35.305 02:00:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.FOHimfKQiO 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.FOHimfKQiO 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1465584 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1465584 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465584 ']' 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.305 02:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.305 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.305 [2024-07-24 02:00:50.122097] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:35.305 [2024-07-24 02:00:50.122183] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.305 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.305 [2024-07-24 02:00:50.190549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.564 [2024-07-24 02:00:50.279513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.564 [2024-07-24 02:00:50.279580] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.564 [2024-07-24 02:00:50.279597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.564 [2024-07-24 02:00:50.279611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.564 [2024-07-24 02:00:50.279623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
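The NVMeTLSkey-1:02:... string produced by format_interchange_psk a few entries above is the TLS PSK interchange form of the configured key: the key text with a CRC32 appended, base64-encoded, and wrapped in a prefix whose second field identifies the hash (02 selects the SHA-384 variant here). A minimal sketch of what the inline "python -" step computes, assuming the key-plus-little-endian-CRC32 layout used by SPDK's test helpers:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the key is used as ASCII text, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte CRC32 appended before encoding
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# should reproduce the NVMeTLSkey-1:02:MDAx...wWXNJw==: value captured in the trace above,
# provided the CRC byte-order assumption holds
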
00:22:35.564 [2024-07-24 02:00:50.279660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.FOHimfKQiO 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FOHimfKQiO 00:22:35.564 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.822 [2024-07-24 02:00:50.641883] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.822 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:36.080 02:00:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:36.338 [2024-07-24 02:00:51.119172] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.338 [2024-07-24 02:00:51.119443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.338 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:36.596 malloc0 00:22:36.596 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.854 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:22:37.112 [2024-07-24 02:00:51.853173] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FOHimfKQiO 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FOHimfKQiO' 00:22:37.112 02:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465763 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465763 /var/tmp/bdevperf.sock 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465763 ']' 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.112 02:00:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.112 [2024-07-24 02:00:51.916573] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:37.112 [2024-07-24 02:00:51.916662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465763 ] 00:22:37.112 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.112 [2024-07-24 02:00:51.974477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.370 [2024-07-24 02:00:52.060927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.370 02:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.370 02:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.370 02:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:22:37.628 [2024-07-24 02:00:52.415903] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.628 [2024-07-24 02:00:52.416040] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.628 TLSTESTn1 00:22:37.628 02:00:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:37.887 Running I/O for 10 seconds... 
00:22:47.857 00:22:47.857 Latency(us) 00:22:47.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.857 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.857 Verification LBA range: start 0x0 length 0x2000 00:22:47.857 TLSTESTn1 : 10.02 3537.24 13.82 0.00 0.00 36121.37 9223.59 41166.32 00:22:47.857 =================================================================================================================== 00:22:47.857 Total : 3537.24 13.82 0.00 0.00 36121.37 9223.59 41166.32 00:22:47.857 0 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1465763 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465763 ']' 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465763 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465763 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465763' 00:22:47.857 killing process with pid 1465763 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465763 00:22:47.857 Received shutdown signal, test time was about 10.000000 seconds 00:22:47.857 00:22:47.857 Latency(us) 00:22:47.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.857 =================================================================================================================== 00:22:47.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:47.857 [2024-07-24 02:01:02.711873] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:47.857 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465763 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.FOHimfKQiO 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FOHimfKQiO 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FOHimfKQiO 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:48.115 
02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FOHimfKQiO 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FOHimfKQiO' 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1467072 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1467072 /var/tmp/bdevperf.sock 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1467072 ']' 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.115 02:01:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:48.115 [2024-07-24 02:01:02.989517] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:48.115 [2024-07-24 02:01:02.989593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467072 ] 00:22:48.373 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.373 [2024-07-24 02:01:03.049030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.373 [2024-07-24 02:01:03.136370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.373 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.373 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:48.373 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:22:48.636 [2024-07-24 02:01:03.465747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.636 [2024-07-24 02:01:03.465843] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:48.636 [2024-07-24 02:01:03.465859] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.FOHimfKQiO 00:22:48.636 request: 00:22:48.636 { 00:22:48.636 "name": "TLSTEST", 00:22:48.636 "trtype": "tcp", 00:22:48.636 "traddr": "10.0.0.2", 00:22:48.636 "adrfam": "ipv4", 00:22:48.636 "trsvcid": "4420", 00:22:48.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.636 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:48.636 "prchk_reftag": false, 00:22:48.636 "prchk_guard": false, 00:22:48.636 "hdgst": false, 00:22:48.636 "ddgst": false, 00:22:48.636 "psk": "/tmp/tmp.FOHimfKQiO", 00:22:48.636 "method": "bdev_nvme_attach_controller", 00:22:48.636 "req_id": 1 00:22:48.636 } 00:22:48.636 Got JSON-RPC error response 00:22:48.636 response: 00:22:48.636 { 00:22:48.636 "code": -1, 00:22:48.636 "message": "Operation not permitted" 00:22:48.636 } 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1467072 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1467072 ']' 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1467072 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467072 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467072' 00:22:48.636 killing process with pid 1467072 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1467072 00:22:48.636 Received shutdown signal, test time was about 10.000000 seconds 00:22:48.636 
00:22:48.636 Latency(us) 00:22:48.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.636 =================================================================================================================== 00:22:48.636 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:48.636 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1467072 00:22:48.897 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:48.897 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:48.897 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:48.897 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:48.897 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1465584 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465584 ']' 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465584 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465584 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465584' 00:22:48.898 killing process with pid 1465584 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465584 00:22:48.898 [2024-07-24 02:01:03.766902] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:48.898 02:01:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465584 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1467214 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1467214 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1467214 ']' 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.156 02:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:49.156 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.415 [2024-07-24 02:01:04.069172] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:49.415 [2024-07-24 02:01:04.069259] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.415 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.415 [2024-07-24 02:01:04.142549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.415 [2024-07-24 02:01:04.236141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.415 [2024-07-24 02:01:04.236204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.415 [2024-07-24 02:01:04.236229] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.415 [2024-07-24 02:01:04.236243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.415 [2024-07-24 02:01:04.236254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
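The trace below runs setup_nvmf_tgt against the same badly-permissioned PSK and, via the NOT wrapper at target/tls.sh@177, expects the final host registration to fail. For reference, the RPC sequence that helper drives, collapsed here into a plain shell sketch from the calls visible in the trace (the rpc.py path and the PSK file name are the ones used by this run), is roughly:

    # sketch only: setup_nvmf_tgt reduced to its RPC calls, as seen in this trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    psk=/tmp/tmp.FOHimfKQiO
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"   # rejected while the key file has too-permissive mode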
00:22:49.415 [2024-07-24 02:01:04.236285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.FOHimfKQiO 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.FOHimfKQiO 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.FOHimfKQiO 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FOHimfKQiO 00:22:49.673 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:49.931 [2024-07-24 02:01:04.660547] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.931 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.188 02:01:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.445 [2024-07-24 02:01:05.206113] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.445 [2024-07-24 02:01:05.206414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.445 02:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.702 malloc0 00:22:50.702 02:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.963 02:01:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:22:51.268 [2024-07-24 02:01:06.016288] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:51.268 [2024-07-24 02:01:06.016352] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:51.268 [2024-07-24 02:01:06.016385] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:51.268 request: 00:22:51.268 { 00:22:51.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:51.268 "host": "nqn.2016-06.io.spdk:host1", 00:22:51.268 "psk": "/tmp/tmp.FOHimfKQiO", 00:22:51.268 "method": "nvmf_subsystem_add_host", 00:22:51.268 "req_id": 1 00:22:51.268 } 00:22:51.268 Got JSON-RPC error response 00:22:51.268 response: 00:22:51.268 { 00:22:51.268 "code": -32603, 00:22:51.268 "message": "Internal error" 00:22:51.268 } 00:22:51.268 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:51.268 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1467214 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1467214 ']' 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1467214 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467214 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467214' 00:22:51.269 killing process with pid 1467214 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1467214 00:22:51.269 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1467214 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.FOHimfKQiO 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1467508 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 1467508 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1467508 ']' 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.534 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.534 [2024-07-24 02:01:06.372024] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:51.534 [2024-07-24 02:01:06.372123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.534 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.792 [2024-07-24 02:01:06.445375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.792 [2024-07-24 02:01:06.532293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.792 [2024-07-24 02:01:06.532386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.792 [2024-07-24 02:01:06.532425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.792 [2024-07-24 02:01:06.532437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.792 [2024-07-24 02:01:06.532446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
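The earlier failure was only the key file's mode: target/tls.sh@181 above tightened it to 0600, and the fresh target started here repeats the same setup at tls.sh@185 below, this time expecting the host registration to go through. A minimal sketch of the decisive step (file name is the temporary PSK generated by this run):

    psk=/tmp/tmp.FOHimfKQiO
    chmod 0600 "$psk"    # the target rejects PSK files with looser permissions
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"
    # with the key at 0600 the call succeeds; the target only logs the
    # 'PSK path' deprecation warning, as seen further down in the trace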
00:22:51.792 [2024-07-24 02:01:06.532472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.FOHimfKQiO 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FOHimfKQiO 00:22:51.792 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.358 [2024-07-24 02:01:06.947666] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.358 02:01:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:52.616 02:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:52.873 [2024-07-24 02:01:07.537291] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.874 [2024-07-24 02:01:07.537559] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.874 02:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.131 malloc0 00:22:53.131 02:01:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.389 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:22:53.647 [2024-07-24 02:01:08.343193] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1467784 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1467784 /var/tmp/bdevperf.sock 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 1467784 ']' 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.647 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.647 [2024-07-24 02:01:08.405813] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:53.647 [2024-07-24 02:01:08.405885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467784 ] 00:22:53.647 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.647 [2024-07-24 02:01:08.464290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.905 [2024-07-24 02:01:08.551301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.905 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.905 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:53.905 02:01:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:22:54.163 [2024-07-24 02:01:08.914495] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.163 [2024-07-24 02:01:08.914656] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:54.163 TLSTESTn1 00:22:54.163 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:54.729 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:54.729 "subsystems": [ 00:22:54.729 { 00:22:54.729 "subsystem": "keyring", 00:22:54.729 "config": [] 00:22:54.729 }, 00:22:54.729 { 00:22:54.729 "subsystem": "iobuf", 00:22:54.729 "config": [ 00:22:54.729 { 00:22:54.729 "method": "iobuf_set_options", 00:22:54.729 "params": { 00:22:54.729 "small_pool_count": 8192, 00:22:54.729 "large_pool_count": 1024, 00:22:54.729 "small_bufsize": 8192, 00:22:54.729 "large_bufsize": 135168 00:22:54.729 } 00:22:54.729 } 00:22:54.729 ] 00:22:54.729 }, 00:22:54.729 { 00:22:54.729 "subsystem": "sock", 00:22:54.729 "config": [ 00:22:54.729 { 00:22:54.729 "method": "sock_set_default_impl", 00:22:54.729 "params": { 00:22:54.729 "impl_name": "posix" 00:22:54.729 } 00:22:54.729 }, 00:22:54.729 { 00:22:54.729 "method": "sock_impl_set_options", 00:22:54.729 "params": { 00:22:54.729 "impl_name": "ssl", 00:22:54.729 "recv_buf_size": 4096, 00:22:54.729 "send_buf_size": 4096, 
00:22:54.729 "enable_recv_pipe": true, 00:22:54.729 "enable_quickack": false, 00:22:54.729 "enable_placement_id": 0, 00:22:54.729 "enable_zerocopy_send_server": true, 00:22:54.729 "enable_zerocopy_send_client": false, 00:22:54.729 "zerocopy_threshold": 0, 00:22:54.729 "tls_version": 0, 00:22:54.729 "enable_ktls": false 00:22:54.729 } 00:22:54.729 }, 00:22:54.729 { 00:22:54.729 "method": "sock_impl_set_options", 00:22:54.729 "params": { 00:22:54.729 "impl_name": "posix", 00:22:54.729 "recv_buf_size": 2097152, 00:22:54.730 "send_buf_size": 2097152, 00:22:54.730 "enable_recv_pipe": true, 00:22:54.730 "enable_quickack": false, 00:22:54.730 "enable_placement_id": 0, 00:22:54.730 "enable_zerocopy_send_server": true, 00:22:54.730 "enable_zerocopy_send_client": false, 00:22:54.730 "zerocopy_threshold": 0, 00:22:54.730 "tls_version": 0, 00:22:54.730 "enable_ktls": false 00:22:54.730 } 00:22:54.730 } 00:22:54.730 ] 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "subsystem": "vmd", 00:22:54.730 "config": [] 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "subsystem": "accel", 00:22:54.730 "config": [ 00:22:54.730 { 00:22:54.730 "method": "accel_set_options", 00:22:54.730 "params": { 00:22:54.730 "small_cache_size": 128, 00:22:54.730 "large_cache_size": 16, 00:22:54.730 "task_count": 2048, 00:22:54.730 "sequence_count": 2048, 00:22:54.730 "buf_count": 2048 00:22:54.730 } 00:22:54.730 } 00:22:54.730 ] 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "subsystem": "bdev", 00:22:54.730 "config": [ 00:22:54.730 { 00:22:54.730 "method": "bdev_set_options", 00:22:54.730 "params": { 00:22:54.730 "bdev_io_pool_size": 65535, 00:22:54.730 "bdev_io_cache_size": 256, 00:22:54.730 "bdev_auto_examine": true, 00:22:54.730 "iobuf_small_cache_size": 128, 00:22:54.730 "iobuf_large_cache_size": 16 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "bdev_raid_set_options", 00:22:54.730 "params": { 00:22:54.730 "process_window_size_kb": 1024, 00:22:54.730 "process_max_bandwidth_mb_sec": 0 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "bdev_iscsi_set_options", 00:22:54.730 "params": { 00:22:54.730 "timeout_sec": 30 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "bdev_nvme_set_options", 00:22:54.730 "params": { 00:22:54.730 "action_on_timeout": "none", 00:22:54.730 "timeout_us": 0, 00:22:54.730 "timeout_admin_us": 0, 00:22:54.730 "keep_alive_timeout_ms": 10000, 00:22:54.730 "arbitration_burst": 0, 00:22:54.730 "low_priority_weight": 0, 00:22:54.730 "medium_priority_weight": 0, 00:22:54.730 "high_priority_weight": 0, 00:22:54.730 "nvme_adminq_poll_period_us": 10000, 00:22:54.730 "nvme_ioq_poll_period_us": 0, 00:22:54.730 "io_queue_requests": 0, 00:22:54.730 "delay_cmd_submit": true, 00:22:54.730 "transport_retry_count": 4, 00:22:54.730 "bdev_retry_count": 3, 00:22:54.730 "transport_ack_timeout": 0, 00:22:54.730 "ctrlr_loss_timeout_sec": 0, 00:22:54.730 "reconnect_delay_sec": 0, 00:22:54.730 "fast_io_fail_timeout_sec": 0, 00:22:54.730 "disable_auto_failback": false, 00:22:54.730 "generate_uuids": false, 00:22:54.730 "transport_tos": 0, 00:22:54.730 "nvme_error_stat": false, 00:22:54.730 "rdma_srq_size": 0, 00:22:54.730 "io_path_stat": false, 00:22:54.730 "allow_accel_sequence": false, 00:22:54.730 "rdma_max_cq_size": 0, 00:22:54.730 "rdma_cm_event_timeout_ms": 0, 00:22:54.730 "dhchap_digests": [ 00:22:54.730 "sha256", 00:22:54.730 "sha384", 00:22:54.730 "sha512" 00:22:54.730 ], 00:22:54.730 "dhchap_dhgroups": [ 00:22:54.730 "null", 00:22:54.730 "ffdhe2048", 00:22:54.730 
"ffdhe3072", 00:22:54.730 "ffdhe4096", 00:22:54.730 "ffdhe6144", 00:22:54.730 "ffdhe8192" 00:22:54.730 ] 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "bdev_nvme_set_hotplug", 00:22:54.730 "params": { 00:22:54.730 "period_us": 100000, 00:22:54.730 "enable": false 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "bdev_malloc_create", 00:22:54.730 "params": { 00:22:54.730 "name": "malloc0", 00:22:54.730 "num_blocks": 8192, 00:22:54.730 "block_size": 4096, 00:22:54.730 "physical_block_size": 4096, 00:22:54.730 "uuid": "c2d9a610-891f-4376-a2af-8da12a2d3424", 00:22:54.730 "optimal_io_boundary": 0, 00:22:54.730 "md_size": 0, 00:22:54.730 "dif_type": 0, 00:22:54.730 "dif_is_head_of_md": false, 00:22:54.730 "dif_pi_format": 0 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "bdev_wait_for_examine" 00:22:54.730 } 00:22:54.730 ] 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "subsystem": "nbd", 00:22:54.730 "config": [] 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "subsystem": "scheduler", 00:22:54.730 "config": [ 00:22:54.730 { 00:22:54.730 "method": "framework_set_scheduler", 00:22:54.730 "params": { 00:22:54.730 "name": "static" 00:22:54.730 } 00:22:54.730 } 00:22:54.730 ] 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "subsystem": "nvmf", 00:22:54.730 "config": [ 00:22:54.730 { 00:22:54.730 "method": "nvmf_set_config", 00:22:54.730 "params": { 00:22:54.730 "discovery_filter": "match_any", 00:22:54.730 "admin_cmd_passthru": { 00:22:54.730 "identify_ctrlr": false 00:22:54.730 } 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_set_max_subsystems", 00:22:54.730 "params": { 00:22:54.730 "max_subsystems": 1024 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_set_crdt", 00:22:54.730 "params": { 00:22:54.730 "crdt1": 0, 00:22:54.730 "crdt2": 0, 00:22:54.730 "crdt3": 0 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_create_transport", 00:22:54.730 "params": { 00:22:54.730 "trtype": "TCP", 00:22:54.730 "max_queue_depth": 128, 00:22:54.730 "max_io_qpairs_per_ctrlr": 127, 00:22:54.730 "in_capsule_data_size": 4096, 00:22:54.730 "max_io_size": 131072, 00:22:54.730 "io_unit_size": 131072, 00:22:54.730 "max_aq_depth": 128, 00:22:54.730 "num_shared_buffers": 511, 00:22:54.730 "buf_cache_size": 4294967295, 00:22:54.730 "dif_insert_or_strip": false, 00:22:54.730 "zcopy": false, 00:22:54.730 "c2h_success": false, 00:22:54.730 "sock_priority": 0, 00:22:54.730 "abort_timeout_sec": 1, 00:22:54.730 "ack_timeout": 0, 00:22:54.730 "data_wr_pool_size": 0 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_create_subsystem", 00:22:54.730 "params": { 00:22:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.730 "allow_any_host": false, 00:22:54.730 "serial_number": "SPDK00000000000001", 00:22:54.730 "model_number": "SPDK bdev Controller", 00:22:54.730 "max_namespaces": 10, 00:22:54.730 "min_cntlid": 1, 00:22:54.730 "max_cntlid": 65519, 00:22:54.730 "ana_reporting": false 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_subsystem_add_host", 00:22:54.730 "params": { 00:22:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.730 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.730 "psk": "/tmp/tmp.FOHimfKQiO" 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_subsystem_add_ns", 00:22:54.730 "params": { 00:22:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.730 "namespace": { 00:22:54.730 "nsid": 1, 00:22:54.730 
"bdev_name": "malloc0", 00:22:54.730 "nguid": "C2D9A610891F4376A2AF8DA12A2D3424", 00:22:54.730 "uuid": "c2d9a610-891f-4376-a2af-8da12a2d3424", 00:22:54.730 "no_auto_visible": false 00:22:54.730 } 00:22:54.730 } 00:22:54.730 }, 00:22:54.730 { 00:22:54.730 "method": "nvmf_subsystem_add_listener", 00:22:54.730 "params": { 00:22:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.730 "listen_address": { 00:22:54.730 "trtype": "TCP", 00:22:54.730 "adrfam": "IPv4", 00:22:54.731 "traddr": "10.0.0.2", 00:22:54.731 "trsvcid": "4420" 00:22:54.731 }, 00:22:54.731 "secure_channel": true 00:22:54.731 } 00:22:54.731 } 00:22:54.731 ] 00:22:54.731 } 00:22:54.731 ] 00:22:54.731 }' 00:22:54.731 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:54.989 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:54.989 "subsystems": [ 00:22:54.989 { 00:22:54.989 "subsystem": "keyring", 00:22:54.989 "config": [] 00:22:54.989 }, 00:22:54.989 { 00:22:54.989 "subsystem": "iobuf", 00:22:54.989 "config": [ 00:22:54.989 { 00:22:54.989 "method": "iobuf_set_options", 00:22:54.989 "params": { 00:22:54.989 "small_pool_count": 8192, 00:22:54.989 "large_pool_count": 1024, 00:22:54.989 "small_bufsize": 8192, 00:22:54.989 "large_bufsize": 135168 00:22:54.989 } 00:22:54.989 } 00:22:54.989 ] 00:22:54.989 }, 00:22:54.989 { 00:22:54.989 "subsystem": "sock", 00:22:54.989 "config": [ 00:22:54.989 { 00:22:54.989 "method": "sock_set_default_impl", 00:22:54.989 "params": { 00:22:54.989 "impl_name": "posix" 00:22:54.989 } 00:22:54.989 }, 00:22:54.989 { 00:22:54.989 "method": "sock_impl_set_options", 00:22:54.989 "params": { 00:22:54.989 "impl_name": "ssl", 00:22:54.989 "recv_buf_size": 4096, 00:22:54.989 "send_buf_size": 4096, 00:22:54.989 "enable_recv_pipe": true, 00:22:54.989 "enable_quickack": false, 00:22:54.989 "enable_placement_id": 0, 00:22:54.989 "enable_zerocopy_send_server": true, 00:22:54.989 "enable_zerocopy_send_client": false, 00:22:54.989 "zerocopy_threshold": 0, 00:22:54.989 "tls_version": 0, 00:22:54.989 "enable_ktls": false 00:22:54.989 } 00:22:54.989 }, 00:22:54.989 { 00:22:54.989 "method": "sock_impl_set_options", 00:22:54.989 "params": { 00:22:54.989 "impl_name": "posix", 00:22:54.989 "recv_buf_size": 2097152, 00:22:54.989 "send_buf_size": 2097152, 00:22:54.989 "enable_recv_pipe": true, 00:22:54.989 "enable_quickack": false, 00:22:54.989 "enable_placement_id": 0, 00:22:54.989 "enable_zerocopy_send_server": true, 00:22:54.989 "enable_zerocopy_send_client": false, 00:22:54.989 "zerocopy_threshold": 0, 00:22:54.989 "tls_version": 0, 00:22:54.989 "enable_ktls": false 00:22:54.989 } 00:22:54.989 } 00:22:54.989 ] 00:22:54.989 }, 00:22:54.989 { 00:22:54.989 "subsystem": "vmd", 00:22:54.989 "config": [] 00:22:54.989 }, 00:22:54.990 { 00:22:54.990 "subsystem": "accel", 00:22:54.990 "config": [ 00:22:54.990 { 00:22:54.990 "method": "accel_set_options", 00:22:54.990 "params": { 00:22:54.990 "small_cache_size": 128, 00:22:54.990 "large_cache_size": 16, 00:22:54.990 "task_count": 2048, 00:22:54.990 "sequence_count": 2048, 00:22:54.990 "buf_count": 2048 00:22:54.990 } 00:22:54.990 } 00:22:54.990 ] 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "subsystem": "bdev", 00:22:54.990 "config": [ 00:22:54.990 { 00:22:54.990 "method": "bdev_set_options", 00:22:54.990 "params": { 00:22:54.990 "bdev_io_pool_size": 65535, 00:22:54.990 "bdev_io_cache_size": 256, 00:22:54.990 
"bdev_auto_examine": true, 00:22:54.990 "iobuf_small_cache_size": 128, 00:22:54.990 "iobuf_large_cache_size": 16 00:22:54.990 } 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "method": "bdev_raid_set_options", 00:22:54.990 "params": { 00:22:54.990 "process_window_size_kb": 1024, 00:22:54.990 "process_max_bandwidth_mb_sec": 0 00:22:54.990 } 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "method": "bdev_iscsi_set_options", 00:22:54.990 "params": { 00:22:54.990 "timeout_sec": 30 00:22:54.990 } 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "method": "bdev_nvme_set_options", 00:22:54.990 "params": { 00:22:54.990 "action_on_timeout": "none", 00:22:54.990 "timeout_us": 0, 00:22:54.990 "timeout_admin_us": 0, 00:22:54.990 "keep_alive_timeout_ms": 10000, 00:22:54.990 "arbitration_burst": 0, 00:22:54.990 "low_priority_weight": 0, 00:22:54.990 "medium_priority_weight": 0, 00:22:54.990 "high_priority_weight": 0, 00:22:54.990 "nvme_adminq_poll_period_us": 10000, 00:22:54.990 "nvme_ioq_poll_period_us": 0, 00:22:54.990 "io_queue_requests": 512, 00:22:54.990 "delay_cmd_submit": true, 00:22:54.990 "transport_retry_count": 4, 00:22:54.990 "bdev_retry_count": 3, 00:22:54.990 "transport_ack_timeout": 0, 00:22:54.990 "ctrlr_loss_timeout_sec": 0, 00:22:54.990 "reconnect_delay_sec": 0, 00:22:54.990 "fast_io_fail_timeout_sec": 0, 00:22:54.990 "disable_auto_failback": false, 00:22:54.990 "generate_uuids": false, 00:22:54.990 "transport_tos": 0, 00:22:54.990 "nvme_error_stat": false, 00:22:54.990 "rdma_srq_size": 0, 00:22:54.990 "io_path_stat": false, 00:22:54.990 "allow_accel_sequence": false, 00:22:54.990 "rdma_max_cq_size": 0, 00:22:54.990 "rdma_cm_event_timeout_ms": 0, 00:22:54.990 "dhchap_digests": [ 00:22:54.990 "sha256", 00:22:54.990 "sha384", 00:22:54.990 "sha512" 00:22:54.990 ], 00:22:54.990 "dhchap_dhgroups": [ 00:22:54.990 "null", 00:22:54.990 "ffdhe2048", 00:22:54.990 "ffdhe3072", 00:22:54.990 "ffdhe4096", 00:22:54.990 "ffdhe6144", 00:22:54.990 "ffdhe8192" 00:22:54.990 ] 00:22:54.990 } 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "method": "bdev_nvme_attach_controller", 00:22:54.990 "params": { 00:22:54.990 "name": "TLSTEST", 00:22:54.990 "trtype": "TCP", 00:22:54.990 "adrfam": "IPv4", 00:22:54.990 "traddr": "10.0.0.2", 00:22:54.990 "trsvcid": "4420", 00:22:54.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.990 "prchk_reftag": false, 00:22:54.990 "prchk_guard": false, 00:22:54.990 "ctrlr_loss_timeout_sec": 0, 00:22:54.990 "reconnect_delay_sec": 0, 00:22:54.990 "fast_io_fail_timeout_sec": 0, 00:22:54.990 "psk": "/tmp/tmp.FOHimfKQiO", 00:22:54.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.990 "hdgst": false, 00:22:54.990 "ddgst": false 00:22:54.990 } 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "method": "bdev_nvme_set_hotplug", 00:22:54.990 "params": { 00:22:54.990 "period_us": 100000, 00:22:54.990 "enable": false 00:22:54.990 } 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "method": "bdev_wait_for_examine" 00:22:54.990 } 00:22:54.990 ] 00:22:54.990 }, 00:22:54.990 { 00:22:54.990 "subsystem": "nbd", 00:22:54.990 "config": [] 00:22:54.990 } 00:22:54.990 ] 00:22:54.990 }' 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1467784 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1467784 ']' 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1467784 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467784 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467784' 00:22:54.990 killing process with pid 1467784 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1467784 00:22:54.990 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.990 00:22:54.990 Latency(us) 00:22:54.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.990 =================================================================================================================== 00:22:54.990 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.990 [2024-07-24 02:01:09.770969] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:54.990 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1467784 00:22:55.248 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1467508 00:22:55.248 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1467508 ']' 00:22:55.248 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1467508 00:22:55.248 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:55.248 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.248 02:01:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467508 00:22:55.248 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:55.248 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:55.248 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467508' 00:22:55.248 killing process with pid 1467508 00:22:55.248 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1467508 00:22:55.248 [2024-07-24 02:01:10.022877] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:55.248 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1467508 00:22:55.506 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:55.506 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.506 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:55.506 "subsystems": [ 00:22:55.506 { 00:22:55.506 "subsystem": "keyring", 00:22:55.506 "config": [] 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "subsystem": "iobuf", 00:22:55.506 "config": [ 00:22:55.506 { 00:22:55.506 "method": "iobuf_set_options", 
00:22:55.506 "params": { 00:22:55.506 "small_pool_count": 8192, 00:22:55.506 "large_pool_count": 1024, 00:22:55.506 "small_bufsize": 8192, 00:22:55.506 "large_bufsize": 135168 00:22:55.506 } 00:22:55.506 } 00:22:55.506 ] 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "subsystem": "sock", 00:22:55.506 "config": [ 00:22:55.506 { 00:22:55.506 "method": "sock_set_default_impl", 00:22:55.506 "params": { 00:22:55.506 "impl_name": "posix" 00:22:55.506 } 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "method": "sock_impl_set_options", 00:22:55.506 "params": { 00:22:55.506 "impl_name": "ssl", 00:22:55.506 "recv_buf_size": 4096, 00:22:55.506 "send_buf_size": 4096, 00:22:55.506 "enable_recv_pipe": true, 00:22:55.506 "enable_quickack": false, 00:22:55.506 "enable_placement_id": 0, 00:22:55.506 "enable_zerocopy_send_server": true, 00:22:55.506 "enable_zerocopy_send_client": false, 00:22:55.506 "zerocopy_threshold": 0, 00:22:55.506 "tls_version": 0, 00:22:55.506 "enable_ktls": false 00:22:55.506 } 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "method": "sock_impl_set_options", 00:22:55.506 "params": { 00:22:55.506 "impl_name": "posix", 00:22:55.506 "recv_buf_size": 2097152, 00:22:55.506 "send_buf_size": 2097152, 00:22:55.506 "enable_recv_pipe": true, 00:22:55.506 "enable_quickack": false, 00:22:55.506 "enable_placement_id": 0, 00:22:55.506 "enable_zerocopy_send_server": true, 00:22:55.506 "enable_zerocopy_send_client": false, 00:22:55.506 "zerocopy_threshold": 0, 00:22:55.506 "tls_version": 0, 00:22:55.506 "enable_ktls": false 00:22:55.506 } 00:22:55.506 } 00:22:55.506 ] 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "subsystem": "vmd", 00:22:55.506 "config": [] 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "subsystem": "accel", 00:22:55.506 "config": [ 00:22:55.506 { 00:22:55.506 "method": "accel_set_options", 00:22:55.506 "params": { 00:22:55.506 "small_cache_size": 128, 00:22:55.506 "large_cache_size": 16, 00:22:55.506 "task_count": 2048, 00:22:55.506 "sequence_count": 2048, 00:22:55.506 "buf_count": 2048 00:22:55.506 } 00:22:55.506 } 00:22:55.506 ] 00:22:55.506 }, 00:22:55.506 { 00:22:55.506 "subsystem": "bdev", 00:22:55.506 "config": [ 00:22:55.506 { 00:22:55.506 "method": "bdev_set_options", 00:22:55.506 "params": { 00:22:55.506 "bdev_io_pool_size": 65535, 00:22:55.507 "bdev_io_cache_size": 256, 00:22:55.507 "bdev_auto_examine": true, 00:22:55.507 "iobuf_small_cache_size": 128, 00:22:55.507 "iobuf_large_cache_size": 16 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "bdev_raid_set_options", 00:22:55.507 "params": { 00:22:55.507 "process_window_size_kb": 1024, 00:22:55.507 "process_max_bandwidth_mb_sec": 0 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "bdev_iscsi_set_options", 00:22:55.507 "params": { 00:22:55.507 "timeout_sec": 30 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "bdev_nvme_set_options", 00:22:55.507 "params": { 00:22:55.507 "action_on_timeout": "none", 00:22:55.507 "timeout_us": 0, 00:22:55.507 "timeout_admin_us": 0, 00:22:55.507 "keep_alive_timeout_ms": 10000, 00:22:55.507 "arbitration_burst": 0, 00:22:55.507 "low_priority_weight": 0, 00:22:55.507 "medium_priority_weight": 0, 00:22:55.507 "high_priority_weight": 0, 00:22:55.507 "nvme_adminq_poll_period_us": 10000, 00:22:55.507 "nvme_ioq_poll_period_us": 0, 00:22:55.507 "io_queue_requests": 0, 00:22:55.507 "delay_cmd_submit": true, 00:22:55.507 "transport_retry_count": 4, 00:22:55.507 "bdev_retry_count": 3, 00:22:55.507 "transport_ack_timeout": 0, 00:22:55.507 
"ctrlr_loss_timeout_sec": 0, 00:22:55.507 "reconnect_delay_sec": 0, 00:22:55.507 "fast_io_fail_timeout_sec": 0, 00:22:55.507 "disable_auto_failback": false, 00:22:55.507 "generate_uuids": false, 00:22:55.507 "transport_tos": 0, 00:22:55.507 "nvme_error_stat": false, 00:22:55.507 "rdma_srq_size": 0, 00:22:55.507 "io_path_stat": false, 00:22:55.507 "allow_accel_sequence": false, 00:22:55.507 "rdma_max_cq_size": 0, 00:22:55.507 "rdma_cm_event_timeout_ms": 0, 00:22:55.507 "dhchap_digests": [ 00:22:55.507 "sha256", 00:22:55.507 "sha384", 00:22:55.507 "sha512" 00:22:55.507 ], 00:22:55.507 "dhchap_dhgroups": [ 00:22:55.507 "null", 00:22:55.507 "ffdhe2048", 00:22:55.507 "ffdhe3072", 00:22:55.507 "ffdhe4096", 00:22:55.507 "ffdhe6144", 00:22:55.507 "ffdhe8192" 00:22:55.507 ] 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "bdev_nvme_set_hotplug", 00:22:55.507 "params": { 00:22:55.507 "period_us": 100000, 00:22:55.507 "enable": false 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "bdev_malloc_create", 00:22:55.507 "params": { 00:22:55.507 "name": "malloc0", 00:22:55.507 "num_blocks": 8192, 00:22:55.507 "block_size": 4096, 00:22:55.507 "physical_block_size": 4096, 00:22:55.507 "uuid": "c2d9a610-891f-4376-a2af-8da12a2d3424", 00:22:55.507 "optimal_io_boundary": 0, 00:22:55.507 "md_size": 0, 00:22:55.507 "dif_type": 0, 00:22:55.507 "dif_is_head_of_md": false, 00:22:55.507 "dif_pi_format": 0 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "bdev_wait_for_examine" 00:22:55.507 } 00:22:55.507 ] 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "subsystem": "nbd", 00:22:55.507 "config": [] 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "subsystem": "scheduler", 00:22:55.507 "config": [ 00:22:55.507 { 00:22:55.507 "method": "framework_set_scheduler", 00:22:55.507 "params": { 00:22:55.507 "name": "static" 00:22:55.507 } 00:22:55.507 } 00:22:55.507 ] 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "subsystem": "nvmf", 00:22:55.507 "config": [ 00:22:55.507 { 00:22:55.507 "method": "nvmf_set_config", 00:22:55.507 "params": { 00:22:55.507 "discovery_filter": "match_any", 00:22:55.507 "admin_cmd_passthru": { 00:22:55.507 "identify_ctrlr": false 00:22:55.507 } 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_set_max_subsystems", 00:22:55.507 "params": { 00:22:55.507 "max_subsystems": 1024 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_set_crdt", 00:22:55.507 "params": { 00:22:55.507 "crdt1": 0, 00:22:55.507 "crdt2": 0, 00:22:55.507 "crdt3": 0 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_create_transport", 00:22:55.507 "params": { 00:22:55.507 "trtype": "TCP", 00:22:55.507 "max_queue_depth": 128, 00:22:55.507 "max_io_qpairs_per_ctrlr": 127, 00:22:55.507 "in_capsule_data_size": 4096, 00:22:55.507 "max_io_size": 131072, 00:22:55.507 "io_unit_size": 131072, 00:22:55.507 "max_aq_depth": 128, 00:22:55.507 "num_shared_buffers": 511, 00:22:55.507 "buf_cache_size": 4294967295, 00:22:55.507 "dif_insert_or_strip": false, 00:22:55.507 "zcopy": false, 00:22:55.507 "c2h_success": false, 00:22:55.507 "sock_priority": 0, 00:22:55.507 "abort_timeout_sec": 1, 00:22:55.507 "ack_timeout": 0, 00:22:55.507 "data_wr_pool_size": 0 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_create_subsystem", 00:22:55.507 "params": { 00:22:55.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.507 "allow_any_host": false, 00:22:55.507 "serial_number": "SPDK00000000000001", 00:22:55.507 
"model_number": "SPDK bdev Controller", 00:22:55.507 "max_namespaces": 10, 00:22:55.507 "min_cntlid": 1, 00:22:55.507 "max_cntlid": 65519, 00:22:55.507 "ana_reporting": false 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_subsystem_add_host", 00:22:55.507 "params": { 00:22:55.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.507 "host": "nqn.2016-06.io.spdk:host1", 00:22:55.507 "psk": "/tmp/tmp.FOHimfKQiO" 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_subsystem_add_ns", 00:22:55.507 "params": { 00:22:55.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.507 "namespace": { 00:22:55.507 "nsid": 1, 00:22:55.507 "bdev_name": "malloc0", 00:22:55.507 "nguid": "C2D9A610891F4376A2AF8DA12A2D3424", 00:22:55.507 "uuid": "c2d9a610-891f-4376-a2af-8da12a2d3424", 00:22:55.507 "no_auto_visible": false 00:22:55.507 } 00:22:55.507 } 00:22:55.507 }, 00:22:55.507 { 00:22:55.507 "method": "nvmf_subsystem_add_listener", 00:22:55.507 "params": { 00:22:55.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.507 "listen_address": { 00:22:55.507 "trtype": "TCP", 00:22:55.507 "adrfam": "IPv4", 00:22:55.507 "traddr": "10.0.0.2", 00:22:55.507 "trsvcid": "4420" 00:22:55.507 }, 00:22:55.507 "secure_channel": true 00:22:55.507 } 00:22:55.507 } 00:22:55.507 ] 00:22:55.507 } 00:22:55.507 ] 00:22:55.507 }' 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1467959 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1467959 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1467959 ']' 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.507 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.508 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.508 02:01:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.508 [2024-07-24 02:01:10.320244] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:22:55.508 [2024-07-24 02:01:10.320359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.508 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.508 [2024-07-24 02:01:10.384489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.766 [2024-07-24 02:01:10.469517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:55.766 [2024-07-24 02:01:10.469576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.766 [2024-07-24 02:01:10.469589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.766 [2024-07-24 02:01:10.469608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.766 [2024-07-24 02:01:10.469618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.766 [2024-07-24 02:01:10.469692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.024 [2024-07-24 02:01:10.700158] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.024 [2024-07-24 02:01:10.720929] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:56.024 [2024-07-24 02:01:10.736991] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:56.024 [2024-07-24 02:01:10.737250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1468108 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1468108 /var/tmp/bdevperf.sock 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1468108 ']' 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
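At this point bdevperf is started idle with its whole bdev/sock configuration handed over on a file descriptor rather than applied through individual RPCs; the -c /dev/fd/63 in the command above is what bash process substitution of the echoed JSON expands to, and the JSON echoed just below is that configuration. The launch pattern, reduced to a sketch (paths and options are the ones from this run; $bdevperfconf stands for the JSON captured at target/tls.sh@197 above):

    # sketch of the launch pattern used here, under the assumptions stated above
    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 \
        -c <(echo "$bdevperfconf") &
    # -z keeps bdevperf idle until told to run; the actual I/O pass is then
    # kicked off over its RPC socket (target/tls.sh@211 further down):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests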
00:22:56.589 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:56.589 "subsystems": [ 00:22:56.589 { 00:22:56.589 "subsystem": "keyring", 00:22:56.589 "config": [] 00:22:56.589 }, 00:22:56.589 { 00:22:56.589 "subsystem": "iobuf", 00:22:56.589 "config": [ 00:22:56.589 { 00:22:56.589 "method": "iobuf_set_options", 00:22:56.589 "params": { 00:22:56.589 "small_pool_count": 8192, 00:22:56.589 "large_pool_count": 1024, 00:22:56.589 "small_bufsize": 8192, 00:22:56.589 "large_bufsize": 135168 00:22:56.589 } 00:22:56.589 } 00:22:56.589 ] 00:22:56.589 }, 00:22:56.589 { 00:22:56.589 "subsystem": "sock", 00:22:56.589 "config": [ 00:22:56.589 { 00:22:56.589 "method": "sock_set_default_impl", 00:22:56.589 "params": { 00:22:56.589 "impl_name": "posix" 00:22:56.589 } 00:22:56.589 }, 00:22:56.589 { 00:22:56.589 "method": "sock_impl_set_options", 00:22:56.589 "params": { 00:22:56.589 "impl_name": "ssl", 00:22:56.589 "recv_buf_size": 4096, 00:22:56.589 "send_buf_size": 4096, 00:22:56.589 "enable_recv_pipe": true, 00:22:56.589 "enable_quickack": false, 00:22:56.589 "enable_placement_id": 0, 00:22:56.589 "enable_zerocopy_send_server": true, 00:22:56.590 "enable_zerocopy_send_client": false, 00:22:56.590 "zerocopy_threshold": 0, 00:22:56.590 "tls_version": 0, 00:22:56.590 "enable_ktls": false 00:22:56.590 } 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "method": "sock_impl_set_options", 00:22:56.590 "params": { 00:22:56.590 "impl_name": "posix", 00:22:56.590 "recv_buf_size": 2097152, 00:22:56.590 "send_buf_size": 2097152, 00:22:56.590 "enable_recv_pipe": true, 00:22:56.590 "enable_quickack": false, 00:22:56.590 "enable_placement_id": 0, 00:22:56.590 "enable_zerocopy_send_server": true, 00:22:56.590 "enable_zerocopy_send_client": false, 00:22:56.590 "zerocopy_threshold": 0, 00:22:56.590 "tls_version": 0, 00:22:56.590 "enable_ktls": false 00:22:56.590 } 00:22:56.590 } 00:22:56.590 ] 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "subsystem": "vmd", 00:22:56.590 "config": [] 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "subsystem": "accel", 00:22:56.590 "config": [ 00:22:56.590 { 00:22:56.590 "method": "accel_set_options", 00:22:56.590 "params": { 00:22:56.590 "small_cache_size": 128, 00:22:56.590 "large_cache_size": 16, 00:22:56.590 "task_count": 2048, 00:22:56.590 "sequence_count": 2048, 00:22:56.590 "buf_count": 2048 00:22:56.590 } 00:22:56.590 } 00:22:56.590 ] 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "subsystem": "bdev", 00:22:56.590 "config": [ 00:22:56.590 { 00:22:56.590 "method": "bdev_set_options", 00:22:56.590 "params": { 00:22:56.590 "bdev_io_pool_size": 65535, 00:22:56.590 "bdev_io_cache_size": 256, 00:22:56.590 "bdev_auto_examine": true, 00:22:56.590 "iobuf_small_cache_size": 128, 00:22:56.590 "iobuf_large_cache_size": 16 00:22:56.590 } 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "method": "bdev_raid_set_options", 00:22:56.590 "params": { 00:22:56.590 "process_window_size_kb": 1024, 00:22:56.590 "process_max_bandwidth_mb_sec": 0 00:22:56.590 } 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "method": "bdev_iscsi_set_options", 00:22:56.590 "params": { 00:22:56.590 "timeout_sec": 30 00:22:56.590 } 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "method": "bdev_nvme_set_options", 00:22:56.590 "params": { 00:22:56.590 "action_on_timeout": "none", 00:22:56.590 "timeout_us": 0, 00:22:56.590 "timeout_admin_us": 0, 00:22:56.590 "keep_alive_timeout_ms": 10000, 00:22:56.590 "arbitration_burst": 0, 00:22:56.590 "low_priority_weight": 0, 00:22:56.590 "medium_priority_weight": 0, 
00:22:56.590 "high_priority_weight": 0, 00:22:56.590 "nvme_adminq_poll_period_us": 10000, 00:22:56.590 "nvme_ioq_poll_period_us": 0, 00:22:56.590 "io_queue_requests": 512, 00:22:56.590 "delay_cmd_submit": true, 00:22:56.590 "transport_retry_count": 4, 00:22:56.590 "bdev_retry_count": 3, 00:22:56.590 "transport_ack_timeout": 0, 00:22:56.590 "ctrlr_loss_timeout_sec": 0, 00:22:56.590 "reconnect_delay_sec": 0, 00:22:56.590 "fast_io_fail_timeout_sec": 0, 00:22:56.590 "disable_auto_failback": false, 00:22:56.590 "generate_uuids": false, 00:22:56.590 "transport_tos": 0, 00:22:56.590 "nvme_error_stat": false, 00:22:56.590 "rdma_srq_size": 0, 00:22:56.590 "io_path_stat": false, 00:22:56.590 "allow_accel_sequence": false, 00:22:56.590 "rdma_max_cq_size": 0, 00:22:56.590 "rdma_cm_event_timeout_ms": 0, 00:22:56.590 "dhchap_digests": [ 00:22:56.590 "sha256", 00:22:56.590 "sha384", 00:22:56.590 "sha512" 00:22:56.590 ], 00:22:56.590 "dhchap_dhgroups": [ 00:22:56.590 "null", 00:22:56.590 "ffdhe2048", 00:22:56.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.590 "ffdhe3072", 00:22:56.590 "ffdhe4096", 00:22:56.590 "ffdhe6144", 00:22:56.590 "ffdhe8192" 00:22:56.590 ] 00:22:56.590 } 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "method": "bdev_nvme_attach_controller", 00:22:56.590 "params": { 00:22:56.590 "name": "TLSTEST", 00:22:56.590 "trtype": "TCP", 00:22:56.590 "adrfam": "IPv4", 00:22:56.590 "traddr": "10.0.0.2", 00:22:56.590 "trsvcid": "4420", 00:22:56.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:56.590 "prchk_reftag": false, 00:22:56.590 "prchk_guard": false, 00:22:56.590 "ctrlr_loss_timeout_sec": 0, 00:22:56.590 "reconnect_delay_sec": 0, 00:22:56.590 "fast_io_fail_timeout_sec": 0, 00:22:56.590 "psk": "/tmp/tmp.FOHimfKQiO", 00:22:56.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:56.590 "hdgst": false, 00:22:56.590 "ddgst": false 00:22:56.590 } 00:22:56.590 }, 00:22:56.590 { 00:22:56.590 "method": "bdev_nvme_set_hotplug", 00:22:56.590 "params": { 00:22:56.590 "period_us": 100000, 00:22:56.591 "enable": false 00:22:56.591 } 00:22:56.591 }, 00:22:56.591 { 00:22:56.591 "method": "bdev_wait_for_examine" 00:22:56.591 } 00:22:56.591 ] 00:22:56.591 }, 00:22:56.591 { 00:22:56.591 "subsystem": "nbd", 00:22:56.591 "config": [] 00:22:56.591 } 00:22:56.591 ] 00:22:56.591 }' 00:22:56.591 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.591 02:01:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.591 [2024-07-24 02:01:11.326366] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:22:56.591 [2024-07-24 02:01:11.326445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1468108 ] 00:22:56.591 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.591 [2024-07-24 02:01:11.383163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.591 [2024-07-24 02:01:11.466185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.849 [2024-07-24 02:01:11.633287] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.849 [2024-07-24 02:01:11.633480] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:57.415 02:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:57.415 02:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:57.415 02:01:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:57.673 Running I/O for 10 seconds... 00:23:07.639 00:23:07.639 Latency(us) 00:23:07.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.639 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:07.639 Verification LBA range: start 0x0 length 0x2000 00:23:07.639 TLSTESTn1 : 10.03 3158.71 12.34 0.00 0.00 40449.91 6650.69 52817.16 00:23:07.639 =================================================================================================================== 00:23:07.639 Total : 3158.71 12.34 0.00 0.00 40449.91 6650.69 52817.16 00:23:07.639 0 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1468108 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1468108 ']' 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1468108 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468108 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468108' 00:23:07.639 killing process with pid 1468108 00:23:07.639 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1468108 00:23:07.639 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.639 00:23:07.639 Latency(us) 00:23:07.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.639 
=================================================================================================================== 00:23:07.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:07.640 [2024-07-24 02:01:22.474160] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:07.640 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1468108 00:23:07.897 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1467959 00:23:07.897 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1467959 ']' 00:23:07.897 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1467959 00:23:07.897 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1467959 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1467959' 00:23:07.898 killing process with pid 1467959 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1467959 00:23:07.898 [2024-07-24 02:01:22.724120] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:07.898 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1467959 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1469435 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1469435 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1469435 ']' 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:08.155 02:01:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.155 [2024-07-24 02:01:23.031563] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:23:08.155 [2024-07-24 02:01:23.031651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.414 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.414 [2024-07-24 02:01:23.098337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.414 [2024-07-24 02:01:23.188466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.414 [2024-07-24 02:01:23.188521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.414 [2024-07-24 02:01:23.188550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.414 [2024-07-24 02:01:23.188561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.414 [2024-07-24 02:01:23.188571] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.414 [2024-07-24 02:01:23.188623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.414 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.414 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:08.414 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.414 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.414 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.682 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.683 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.FOHimfKQiO 00:23:08.683 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FOHimfKQiO 00:23:08.683 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:08.940 [2024-07-24 02:01:23.596965] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.940 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:09.198 02:01:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:09.456 [2024-07-24 02:01:24.110343] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:09.456 [2024-07-24 02:01:24.110598] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.456 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:09.714 malloc0 00:23:09.714 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:09.972 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FOHimfKQiO 00:23:10.230 [2024-07-24 02:01:24.908209] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1469718 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1469718 /var/tmp/bdevperf.sock 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1469718 ']' 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.230 02:01:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.230 [2024-07-24 02:01:24.968158] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:23:10.230 [2024-07-24 02:01:24.968242] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469718 ] 00:23:10.230 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.230 [2024-07-24 02:01:25.027951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.230 [2024-07-24 02:01:25.114245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.488 02:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.488 02:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:10.488 02:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FOHimfKQiO 00:23:10.746 02:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:11.003 [2024-07-24 02:01:25.689983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.004 nvme0n1 00:23:11.004 02:01:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:11.004 Running I/O for 1 seconds... 00:23:12.376 00:23:12.376 Latency(us) 00:23:12.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.376 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:12.376 Verification LBA range: start 0x0 length 0x2000 00:23:12.376 nvme0n1 : 1.02 3230.43 12.62 0.00 0.00 39176.13 6407.96 36894.34 00:23:12.376 =================================================================================================================== 00:23:12.376 Total : 3230.43 12.62 0.00 0.00 39176.13 6407.96 36894.34 00:23:12.376 0 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1469718 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1469718 ']' 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1469718 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469718 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469718' 00:23:12.376 killing process with pid 1469718 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1469718 00:23:12.376 Received shutdown signal, 
test time was about 1.000000 seconds 00:23:12.376 00:23:12.376 Latency(us) 00:23:12.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:12.376 =================================================================================================================== 00:23:12.376 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:12.376 02:01:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1469718 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1469435 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1469435 ']' 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1469435 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469435 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469435' 00:23:12.376 killing process with pid 1469435 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1469435 00:23:12.376 [2024-07-24 02:01:27.211181] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:12.376 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1469435 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1469996 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1469996 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1469996 ']' 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.634 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:12.634 [2024-07-24 02:01:27.500718] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:23:12.634 [2024-07-24 02:01:27.500796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.891 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.891 [2024-07-24 02:01:27.566557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.892 [2024-07-24 02:01:27.654608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.892 [2024-07-24 02:01:27.654674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.892 [2024-07-24 02:01:27.654691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.892 [2024-07-24 02:01:27.654705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.892 [2024-07-24 02:01:27.654718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.892 [2024-07-24 02:01:27.654758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.892 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.892 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:12.892 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.892 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.892 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.149 [2024-07-24 02:01:27.795167] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.149 malloc0 00:23:13.149 [2024-07-24 02:01:27.827158] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:13.149 [2024-07-24 02:01:27.844526] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1470137 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1470137 /var/tmp/bdevperf.sock 00:23:13.149 02:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1470137 ']' 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.149 02:01:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.149 [2024-07-24 02:01:27.910850] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:23:13.149 [2024-07-24 02:01:27.910928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470137 ] 00:23:13.149 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.149 [2024-07-24 02:01:27.971833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.411 [2024-07-24 02:01:28.062570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.411 02:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:13.411 02:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:13.411 02:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.FOHimfKQiO 00:23:13.715 02:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:13.974 [2024-07-24 02:01:28.706887] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.974 nvme0n1 00:23:13.974 02:01:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.232 Running I/O for 1 seconds... 
00:23:15.170 00:23:15.170 Latency(us) 00:23:15.170 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.170 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:15.170 Verification LBA range: start 0x0 length 0x2000 00:23:15.170 nvme0n1 : 1.04 3025.27 11.82 0.00 0.00 41684.11 6650.69 42525.58 00:23:15.170 =================================================================================================================== 00:23:15.170 Total : 3025.27 11.82 0.00 0.00 41684.11 6650.69 42525.58 00:23:15.170 0 00:23:15.170 02:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:23:15.170 02:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.170 02:01:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.170 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.170 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:23:15.170 "subsystems": [ 00:23:15.170 { 00:23:15.170 "subsystem": "keyring", 00:23:15.170 "config": [ 00:23:15.170 { 00:23:15.170 "method": "keyring_file_add_key", 00:23:15.170 "params": { 00:23:15.170 "name": "key0", 00:23:15.170 "path": "/tmp/tmp.FOHimfKQiO" 00:23:15.170 } 00:23:15.170 } 00:23:15.170 ] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "iobuf", 00:23:15.170 "config": [ 00:23:15.170 { 00:23:15.170 "method": "iobuf_set_options", 00:23:15.170 "params": { 00:23:15.170 "small_pool_count": 8192, 00:23:15.170 "large_pool_count": 1024, 00:23:15.170 "small_bufsize": 8192, 00:23:15.170 "large_bufsize": 135168 00:23:15.170 } 00:23:15.170 } 00:23:15.170 ] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "sock", 00:23:15.170 "config": [ 00:23:15.170 { 00:23:15.170 "method": "sock_set_default_impl", 00:23:15.170 "params": { 00:23:15.170 "impl_name": "posix" 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "sock_impl_set_options", 00:23:15.170 "params": { 00:23:15.170 "impl_name": "ssl", 00:23:15.170 "recv_buf_size": 4096, 00:23:15.170 "send_buf_size": 4096, 00:23:15.170 "enable_recv_pipe": true, 00:23:15.170 "enable_quickack": false, 00:23:15.170 "enable_placement_id": 0, 00:23:15.170 "enable_zerocopy_send_server": true, 00:23:15.170 "enable_zerocopy_send_client": false, 00:23:15.170 "zerocopy_threshold": 0, 00:23:15.170 "tls_version": 0, 00:23:15.170 "enable_ktls": false 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "sock_impl_set_options", 00:23:15.170 "params": { 00:23:15.170 "impl_name": "posix", 00:23:15.170 "recv_buf_size": 2097152, 00:23:15.170 "send_buf_size": 2097152, 00:23:15.170 "enable_recv_pipe": true, 00:23:15.170 "enable_quickack": false, 00:23:15.170 "enable_placement_id": 0, 00:23:15.170 "enable_zerocopy_send_server": true, 00:23:15.170 "enable_zerocopy_send_client": false, 00:23:15.170 "zerocopy_threshold": 0, 00:23:15.170 "tls_version": 0, 00:23:15.170 "enable_ktls": false 00:23:15.170 } 00:23:15.170 } 00:23:15.170 ] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "vmd", 00:23:15.170 "config": [] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "accel", 00:23:15.170 "config": [ 00:23:15.170 { 00:23:15.170 "method": "accel_set_options", 00:23:15.170 "params": { 00:23:15.170 "small_cache_size": 128, 00:23:15.170 "large_cache_size": 16, 00:23:15.170 "task_count": 2048, 00:23:15.170 "sequence_count": 2048, 00:23:15.170 "buf_count": 
2048 00:23:15.170 } 00:23:15.170 } 00:23:15.170 ] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "bdev", 00:23:15.170 "config": [ 00:23:15.170 { 00:23:15.170 "method": "bdev_set_options", 00:23:15.170 "params": { 00:23:15.170 "bdev_io_pool_size": 65535, 00:23:15.170 "bdev_io_cache_size": 256, 00:23:15.170 "bdev_auto_examine": true, 00:23:15.170 "iobuf_small_cache_size": 128, 00:23:15.170 "iobuf_large_cache_size": 16 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "bdev_raid_set_options", 00:23:15.170 "params": { 00:23:15.170 "process_window_size_kb": 1024, 00:23:15.170 "process_max_bandwidth_mb_sec": 0 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "bdev_iscsi_set_options", 00:23:15.170 "params": { 00:23:15.170 "timeout_sec": 30 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "bdev_nvme_set_options", 00:23:15.170 "params": { 00:23:15.170 "action_on_timeout": "none", 00:23:15.170 "timeout_us": 0, 00:23:15.170 "timeout_admin_us": 0, 00:23:15.170 "keep_alive_timeout_ms": 10000, 00:23:15.170 "arbitration_burst": 0, 00:23:15.170 "low_priority_weight": 0, 00:23:15.170 "medium_priority_weight": 0, 00:23:15.170 "high_priority_weight": 0, 00:23:15.170 "nvme_adminq_poll_period_us": 10000, 00:23:15.170 "nvme_ioq_poll_period_us": 0, 00:23:15.170 "io_queue_requests": 0, 00:23:15.170 "delay_cmd_submit": true, 00:23:15.170 "transport_retry_count": 4, 00:23:15.170 "bdev_retry_count": 3, 00:23:15.170 "transport_ack_timeout": 0, 00:23:15.170 "ctrlr_loss_timeout_sec": 0, 00:23:15.170 "reconnect_delay_sec": 0, 00:23:15.170 "fast_io_fail_timeout_sec": 0, 00:23:15.170 "disable_auto_failback": false, 00:23:15.170 "generate_uuids": false, 00:23:15.170 "transport_tos": 0, 00:23:15.170 "nvme_error_stat": false, 00:23:15.170 "rdma_srq_size": 0, 00:23:15.170 "io_path_stat": false, 00:23:15.170 "allow_accel_sequence": false, 00:23:15.170 "rdma_max_cq_size": 0, 00:23:15.170 "rdma_cm_event_timeout_ms": 0, 00:23:15.170 "dhchap_digests": [ 00:23:15.170 "sha256", 00:23:15.170 "sha384", 00:23:15.170 "sha512" 00:23:15.170 ], 00:23:15.170 "dhchap_dhgroups": [ 00:23:15.170 "null", 00:23:15.170 "ffdhe2048", 00:23:15.170 "ffdhe3072", 00:23:15.170 "ffdhe4096", 00:23:15.170 "ffdhe6144", 00:23:15.170 "ffdhe8192" 00:23:15.170 ] 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "bdev_nvme_set_hotplug", 00:23:15.170 "params": { 00:23:15.170 "period_us": 100000, 00:23:15.170 "enable": false 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "bdev_malloc_create", 00:23:15.170 "params": { 00:23:15.170 "name": "malloc0", 00:23:15.170 "num_blocks": 8192, 00:23:15.170 "block_size": 4096, 00:23:15.170 "physical_block_size": 4096, 00:23:15.170 "uuid": "bb4b7592-2d55-4e29-b545-35c767c60718", 00:23:15.170 "optimal_io_boundary": 0, 00:23:15.170 "md_size": 0, 00:23:15.170 "dif_type": 0, 00:23:15.170 "dif_is_head_of_md": false, 00:23:15.170 "dif_pi_format": 0 00:23:15.170 } 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "method": "bdev_wait_for_examine" 00:23:15.170 } 00:23:15.170 ] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "nbd", 00:23:15.170 "config": [] 00:23:15.170 }, 00:23:15.170 { 00:23:15.170 "subsystem": "scheduler", 00:23:15.170 "config": [ 00:23:15.170 { 00:23:15.170 "method": "framework_set_scheduler", 00:23:15.170 "params": { 00:23:15.170 "name": "static" 00:23:15.170 } 00:23:15.170 } 00:23:15.170 ] 00:23:15.170 }, 00:23:15.170 { 00:23:15.171 "subsystem": "nvmf", 00:23:15.171 "config": [ 00:23:15.171 { 00:23:15.171 
"method": "nvmf_set_config", 00:23:15.171 "params": { 00:23:15.171 "discovery_filter": "match_any", 00:23:15.171 "admin_cmd_passthru": { 00:23:15.171 "identify_ctrlr": false 00:23:15.171 } 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_set_max_subsystems", 00:23:15.171 "params": { 00:23:15.171 "max_subsystems": 1024 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_set_crdt", 00:23:15.171 "params": { 00:23:15.171 "crdt1": 0, 00:23:15.171 "crdt2": 0, 00:23:15.171 "crdt3": 0 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_create_transport", 00:23:15.171 "params": { 00:23:15.171 "trtype": "TCP", 00:23:15.171 "max_queue_depth": 128, 00:23:15.171 "max_io_qpairs_per_ctrlr": 127, 00:23:15.171 "in_capsule_data_size": 4096, 00:23:15.171 "max_io_size": 131072, 00:23:15.171 "io_unit_size": 131072, 00:23:15.171 "max_aq_depth": 128, 00:23:15.171 "num_shared_buffers": 511, 00:23:15.171 "buf_cache_size": 4294967295, 00:23:15.171 "dif_insert_or_strip": false, 00:23:15.171 "zcopy": false, 00:23:15.171 "c2h_success": false, 00:23:15.171 "sock_priority": 0, 00:23:15.171 "abort_timeout_sec": 1, 00:23:15.171 "ack_timeout": 0, 00:23:15.171 "data_wr_pool_size": 0 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_create_subsystem", 00:23:15.171 "params": { 00:23:15.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.171 "allow_any_host": false, 00:23:15.171 "serial_number": "00000000000000000000", 00:23:15.171 "model_number": "SPDK bdev Controller", 00:23:15.171 "max_namespaces": 32, 00:23:15.171 "min_cntlid": 1, 00:23:15.171 "max_cntlid": 65519, 00:23:15.171 "ana_reporting": false 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_subsystem_add_host", 00:23:15.171 "params": { 00:23:15.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.171 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.171 "psk": "key0" 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_subsystem_add_ns", 00:23:15.171 "params": { 00:23:15.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.171 "namespace": { 00:23:15.171 "nsid": 1, 00:23:15.171 "bdev_name": "malloc0", 00:23:15.171 "nguid": "BB4B75922D554E29B54535C767C60718", 00:23:15.171 "uuid": "bb4b7592-2d55-4e29-b545-35c767c60718", 00:23:15.171 "no_auto_visible": false 00:23:15.171 } 00:23:15.171 } 00:23:15.171 }, 00:23:15.171 { 00:23:15.171 "method": "nvmf_subsystem_add_listener", 00:23:15.171 "params": { 00:23:15.171 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.171 "listen_address": { 00:23:15.171 "trtype": "TCP", 00:23:15.171 "adrfam": "IPv4", 00:23:15.171 "traddr": "10.0.0.2", 00:23:15.171 "trsvcid": "4420" 00:23:15.171 }, 00:23:15.171 "secure_channel": false, 00:23:15.171 "sock_impl": "ssl" 00:23:15.171 } 00:23:15.171 } 00:23:15.171 ] 00:23:15.171 } 00:23:15.171 ] 00:23:15.171 }' 00:23:15.429 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:15.689 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:23:15.689 "subsystems": [ 00:23:15.689 { 00:23:15.689 "subsystem": "keyring", 00:23:15.689 "config": [ 00:23:15.689 { 00:23:15.689 "method": "keyring_file_add_key", 00:23:15.689 "params": { 00:23:15.689 "name": "key0", 00:23:15.689 "path": "/tmp/tmp.FOHimfKQiO" 00:23:15.689 } 00:23:15.689 } 00:23:15.689 ] 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "subsystem": "iobuf", 00:23:15.689 
"config": [ 00:23:15.689 { 00:23:15.689 "method": "iobuf_set_options", 00:23:15.689 "params": { 00:23:15.689 "small_pool_count": 8192, 00:23:15.689 "large_pool_count": 1024, 00:23:15.689 "small_bufsize": 8192, 00:23:15.689 "large_bufsize": 135168 00:23:15.689 } 00:23:15.689 } 00:23:15.689 ] 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "subsystem": "sock", 00:23:15.689 "config": [ 00:23:15.689 { 00:23:15.689 "method": "sock_set_default_impl", 00:23:15.689 "params": { 00:23:15.689 "impl_name": "posix" 00:23:15.689 } 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "method": "sock_impl_set_options", 00:23:15.689 "params": { 00:23:15.689 "impl_name": "ssl", 00:23:15.689 "recv_buf_size": 4096, 00:23:15.689 "send_buf_size": 4096, 00:23:15.689 "enable_recv_pipe": true, 00:23:15.689 "enable_quickack": false, 00:23:15.689 "enable_placement_id": 0, 00:23:15.689 "enable_zerocopy_send_server": true, 00:23:15.689 "enable_zerocopy_send_client": false, 00:23:15.689 "zerocopy_threshold": 0, 00:23:15.689 "tls_version": 0, 00:23:15.689 "enable_ktls": false 00:23:15.689 } 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "method": "sock_impl_set_options", 00:23:15.689 "params": { 00:23:15.689 "impl_name": "posix", 00:23:15.689 "recv_buf_size": 2097152, 00:23:15.689 "send_buf_size": 2097152, 00:23:15.689 "enable_recv_pipe": true, 00:23:15.689 "enable_quickack": false, 00:23:15.689 "enable_placement_id": 0, 00:23:15.689 "enable_zerocopy_send_server": true, 00:23:15.689 "enable_zerocopy_send_client": false, 00:23:15.689 "zerocopy_threshold": 0, 00:23:15.689 "tls_version": 0, 00:23:15.689 "enable_ktls": false 00:23:15.689 } 00:23:15.689 } 00:23:15.689 ] 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "subsystem": "vmd", 00:23:15.689 "config": [] 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "subsystem": "accel", 00:23:15.689 "config": [ 00:23:15.689 { 00:23:15.689 "method": "accel_set_options", 00:23:15.689 "params": { 00:23:15.689 "small_cache_size": 128, 00:23:15.689 "large_cache_size": 16, 00:23:15.689 "task_count": 2048, 00:23:15.689 "sequence_count": 2048, 00:23:15.689 "buf_count": 2048 00:23:15.689 } 00:23:15.689 } 00:23:15.689 ] 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "subsystem": "bdev", 00:23:15.689 "config": [ 00:23:15.689 { 00:23:15.689 "method": "bdev_set_options", 00:23:15.689 "params": { 00:23:15.689 "bdev_io_pool_size": 65535, 00:23:15.689 "bdev_io_cache_size": 256, 00:23:15.689 "bdev_auto_examine": true, 00:23:15.689 "iobuf_small_cache_size": 128, 00:23:15.689 "iobuf_large_cache_size": 16 00:23:15.689 } 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "method": "bdev_raid_set_options", 00:23:15.689 "params": { 00:23:15.689 "process_window_size_kb": 1024, 00:23:15.689 "process_max_bandwidth_mb_sec": 0 00:23:15.689 } 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "method": "bdev_iscsi_set_options", 00:23:15.689 "params": { 00:23:15.689 "timeout_sec": 30 00:23:15.689 } 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "method": "bdev_nvme_set_options", 00:23:15.689 "params": { 00:23:15.689 "action_on_timeout": "none", 00:23:15.689 "timeout_us": 0, 00:23:15.689 "timeout_admin_us": 0, 00:23:15.689 "keep_alive_timeout_ms": 10000, 00:23:15.689 "arbitration_burst": 0, 00:23:15.689 "low_priority_weight": 0, 00:23:15.689 "medium_priority_weight": 0, 00:23:15.689 "high_priority_weight": 0, 00:23:15.689 "nvme_adminq_poll_period_us": 10000, 00:23:15.689 "nvme_ioq_poll_period_us": 0, 00:23:15.689 "io_queue_requests": 512, 00:23:15.689 "delay_cmd_submit": true, 00:23:15.689 "transport_retry_count": 4, 00:23:15.689 "bdev_retry_count": 3, 
00:23:15.689 "transport_ack_timeout": 0, 00:23:15.689 "ctrlr_loss_timeout_sec": 0, 00:23:15.689 "reconnect_delay_sec": 0, 00:23:15.689 "fast_io_fail_timeout_sec": 0, 00:23:15.689 "disable_auto_failback": false, 00:23:15.689 "generate_uuids": false, 00:23:15.689 "transport_tos": 0, 00:23:15.689 "nvme_error_stat": false, 00:23:15.689 "rdma_srq_size": 0, 00:23:15.689 "io_path_stat": false, 00:23:15.689 "allow_accel_sequence": false, 00:23:15.689 "rdma_max_cq_size": 0, 00:23:15.689 "rdma_cm_event_timeout_ms": 0, 00:23:15.689 "dhchap_digests": [ 00:23:15.689 "sha256", 00:23:15.689 "sha384", 00:23:15.689 "sha512" 00:23:15.689 ], 00:23:15.689 "dhchap_dhgroups": [ 00:23:15.689 "null", 00:23:15.689 "ffdhe2048", 00:23:15.689 "ffdhe3072", 00:23:15.689 "ffdhe4096", 00:23:15.689 "ffdhe6144", 00:23:15.689 "ffdhe8192" 00:23:15.689 ] 00:23:15.689 } 00:23:15.689 }, 00:23:15.689 { 00:23:15.689 "method": "bdev_nvme_attach_controller", 00:23:15.689 "params": { 00:23:15.689 "name": "nvme0", 00:23:15.689 "trtype": "TCP", 00:23:15.689 "adrfam": "IPv4", 00:23:15.689 "traddr": "10.0.0.2", 00:23:15.689 "trsvcid": "4420", 00:23:15.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.690 "prchk_reftag": false, 00:23:15.690 "prchk_guard": false, 00:23:15.690 "ctrlr_loss_timeout_sec": 0, 00:23:15.690 "reconnect_delay_sec": 0, 00:23:15.690 "fast_io_fail_timeout_sec": 0, 00:23:15.690 "psk": "key0", 00:23:15.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.690 "hdgst": false, 00:23:15.690 "ddgst": false 00:23:15.690 } 00:23:15.690 }, 00:23:15.690 { 00:23:15.690 "method": "bdev_nvme_set_hotplug", 00:23:15.690 "params": { 00:23:15.690 "period_us": 100000, 00:23:15.690 "enable": false 00:23:15.690 } 00:23:15.690 }, 00:23:15.690 { 00:23:15.690 "method": "bdev_enable_histogram", 00:23:15.690 "params": { 00:23:15.690 "name": "nvme0n1", 00:23:15.690 "enable": true 00:23:15.690 } 00:23:15.690 }, 00:23:15.690 { 00:23:15.690 "method": "bdev_wait_for_examine" 00:23:15.690 } 00:23:15.690 ] 00:23:15.690 }, 00:23:15.690 { 00:23:15.690 "subsystem": "nbd", 00:23:15.690 "config": [] 00:23:15.690 } 00:23:15.690 ] 00:23:15.690 }' 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1470137 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1470137 ']' 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1470137 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470137 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470137' 00:23:15.690 killing process with pid 1470137 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1470137 00:23:15.690 Received shutdown signal, test time was about 1.000000 seconds 00:23:15.690 00:23:15.690 Latency(us) 00:23:15.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.690 
=================================================================================================================== 00:23:15.690 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.690 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1470137 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1469996 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1469996 ']' 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1469996 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469996 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469996' 00:23:15.949 killing process with pid 1469996 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1469996 00:23:15.949 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1469996 00:23:16.207 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:23:16.207 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:23:16.207 "subsystems": [ 00:23:16.207 { 00:23:16.207 "subsystem": "keyring", 00:23:16.207 "config": [ 00:23:16.207 { 00:23:16.207 "method": "keyring_file_add_key", 00:23:16.207 "params": { 00:23:16.207 "name": "key0", 00:23:16.207 "path": "/tmp/tmp.FOHimfKQiO" 00:23:16.207 } 00:23:16.207 } 00:23:16.207 ] 00:23:16.207 }, 00:23:16.207 { 00:23:16.207 "subsystem": "iobuf", 00:23:16.207 "config": [ 00:23:16.207 { 00:23:16.207 "method": "iobuf_set_options", 00:23:16.207 "params": { 00:23:16.207 "small_pool_count": 8192, 00:23:16.207 "large_pool_count": 1024, 00:23:16.207 "small_bufsize": 8192, 00:23:16.207 "large_bufsize": 135168 00:23:16.207 } 00:23:16.207 } 00:23:16.207 ] 00:23:16.207 }, 00:23:16.207 { 00:23:16.208 "subsystem": "sock", 00:23:16.208 "config": [ 00:23:16.208 { 00:23:16.208 "method": "sock_set_default_impl", 00:23:16.208 "params": { 00:23:16.208 "impl_name": "posix" 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "sock_impl_set_options", 00:23:16.208 "params": { 00:23:16.208 "impl_name": "ssl", 00:23:16.208 "recv_buf_size": 4096, 00:23:16.208 "send_buf_size": 4096, 00:23:16.208 "enable_recv_pipe": true, 00:23:16.208 "enable_quickack": false, 00:23:16.208 "enable_placement_id": 0, 00:23:16.208 "enable_zerocopy_send_server": true, 00:23:16.208 "enable_zerocopy_send_client": false, 00:23:16.208 "zerocopy_threshold": 0, 00:23:16.208 "tls_version": 0, 00:23:16.208 "enable_ktls": false 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "sock_impl_set_options", 00:23:16.208 "params": { 00:23:16.208 "impl_name": "posix", 00:23:16.208 "recv_buf_size": 2097152, 00:23:16.208 "send_buf_size": 2097152, 00:23:16.208 "enable_recv_pipe": true, 00:23:16.208 "enable_quickack": false, 
00:23:16.208 "enable_placement_id": 0, 00:23:16.208 "enable_zerocopy_send_server": true, 00:23:16.208 "enable_zerocopy_send_client": false, 00:23:16.208 "zerocopy_threshold": 0, 00:23:16.208 "tls_version": 0, 00:23:16.208 "enable_ktls": false 00:23:16.208 } 00:23:16.208 } 00:23:16.208 ] 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "subsystem": "vmd", 00:23:16.208 "config": [] 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "subsystem": "accel", 00:23:16.208 "config": [ 00:23:16.208 { 00:23:16.208 "method": "accel_set_options", 00:23:16.208 "params": { 00:23:16.208 "small_cache_size": 128, 00:23:16.208 "large_cache_size": 16, 00:23:16.208 "task_count": 2048, 00:23:16.208 "sequence_count": 2048, 00:23:16.208 "buf_count": 2048 00:23:16.208 } 00:23:16.208 } 00:23:16.208 ] 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "subsystem": "bdev", 00:23:16.208 "config": [ 00:23:16.208 { 00:23:16.208 "method": "bdev_set_options", 00:23:16.208 "params": { 00:23:16.208 "bdev_io_pool_size": 65535, 00:23:16.208 "bdev_io_cache_size": 256, 00:23:16.208 "bdev_auto_examine": true, 00:23:16.208 "iobuf_small_cache_size": 128, 00:23:16.208 "iobuf_large_cache_size": 16 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "bdev_raid_set_options", 00:23:16.208 "params": { 00:23:16.208 "process_window_size_kb": 1024, 00:23:16.208 "process_max_bandwidth_mb_sec": 0 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "bdev_iscsi_set_options", 00:23:16.208 "params": { 00:23:16.208 "timeout_sec": 30 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "bdev_nvme_set_options", 00:23:16.208 "params": { 00:23:16.208 "action_on_timeout": "none", 00:23:16.208 "timeout_us": 0, 00:23:16.208 "timeout_admin_us": 0, 00:23:16.208 "keep_alive_timeout_ms": 10000, 00:23:16.208 "arbitration_burst": 0, 00:23:16.208 "low_priority_weight": 0, 00:23:16.208 "medium_priority_weight": 0, 00:23:16.208 "high_priority_weight": 0, 00:23:16.208 "nvme_adminq_poll_period_us": 10000, 00:23:16.208 "nvme_ioq_poll_period_us": 0, 00:23:16.208 "io_queue_requests": 0, 00:23:16.208 "delay_cmd_submit": true, 00:23:16.208 "transport_retry_count": 4, 00:23:16.208 "bdev_retry_count": 3, 00:23:16.208 "transport_ack_timeout": 0, 00:23:16.208 "ctrlr_loss_timeout_sec": 0, 00:23:16.208 "reconnect_delay_sec": 0, 00:23:16.208 "fast_io_fail_timeout_sec": 0, 00:23:16.208 "disable_auto_failback": false, 00:23:16.208 "generate_uuids": false, 00:23:16.208 "transport_tos": 0, 00:23:16.208 "nvme_error_stat": false, 00:23:16.208 "rdma_srq_size": 0, 00:23:16.208 "io_path_stat": false, 00:23:16.208 "allow_accel_sequence": false, 00:23:16.208 "rdma_max_cq_size": 0, 00:23:16.208 "rdma_cm_event_timeout_ms": 0, 00:23:16.208 "dhchap_digests": [ 00:23:16.208 "sha256", 00:23:16.208 "sha384", 00:23:16.208 "sha512" 00:23:16.208 ], 00:23:16.208 "dhchap_dhgroups": [ 00:23:16.208 "null", 00:23:16.208 "ffdhe2048", 00:23:16.208 "ffdhe3072", 00:23:16.208 "ffdhe4096", 00:23:16.208 "ffdhe6144", 00:23:16.208 "ffdhe8192" 00:23:16.208 ] 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "bdev_nvme_set_hotplug", 00:23:16.208 "params": { 00:23:16.208 "period_us": 100000, 00:23:16.208 "enable": false 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "bdev_malloc_create", 00:23:16.208 "params": { 00:23:16.208 "name": "malloc0", 00:23:16.208 "num_blocks": 8192, 00:23:16.208 "block_size": 4096, 00:23:16.208 "physical_block_size": 4096, 00:23:16.208 "uuid": "bb4b7592-2d55-4e29-b545-35c767c60718", 00:23:16.208 
"optimal_io_boundary": 0, 00:23:16.208 "md_size": 0, 00:23:16.208 "dif_type": 0, 00:23:16.208 "dif_is_head_of_md": false, 00:23:16.208 "dif_pi_format": 0 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "bdev_wait_for_examine" 00:23:16.208 } 00:23:16.208 ] 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "subsystem": "nbd", 00:23:16.208 "config": [] 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "subsystem": "scheduler", 00:23:16.208 "config": [ 00:23:16.208 { 00:23:16.208 "method": "framework_set_scheduler", 00:23:16.208 "params": { 00:23:16.208 "name": "static" 00:23:16.208 } 00:23:16.208 } 00:23:16.208 ] 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "subsystem": "nvmf", 00:23:16.208 "config": [ 00:23:16.208 { 00:23:16.208 "method": "nvmf_set_config", 00:23:16.208 "params": { 00:23:16.208 "discovery_filter": "match_any", 00:23:16.208 "admin_cmd_passthru": { 00:23:16.208 "identify_ctrlr": false 00:23:16.208 } 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "nvmf_set_max_subsystems", 00:23:16.208 "params": { 00:23:16.208 "max_subsystems": 1024 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "nvmf_set_crdt", 00:23:16.208 "params": { 00:23:16.208 "crdt1": 0, 00:23:16.208 "crdt2": 0, 00:23:16.208 "crdt3": 0 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "nvmf_create_transport", 00:23:16.208 "params": { 00:23:16.208 "trtype": "TCP", 00:23:16.208 "max_queue_depth": 128, 00:23:16.208 "max_io_qpairs_per_ctrlr": 127, 00:23:16.208 "in_capsule_data_size": 4096, 00:23:16.208 "max_io_size": 131072, 00:23:16.208 "io_unit_size": 131072, 00:23:16.208 "max_aq_depth": 128, 00:23:16.208 "num_shared_buffers": 511, 00:23:16.208 "buf_cache_size": 4294967295, 00:23:16.208 "dif_insert_or_strip": false, 00:23:16.208 "zcopy": false, 00:23:16.208 "c2h_success": false, 00:23:16.208 "sock_priority": 0, 00:23:16.208 "abort_timeout_sec": 1, 00:23:16.208 " 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.208 ack_timeout": 0, 00:23:16.208 "data_wr_pool_size": 0 00:23:16.208 } 00:23:16.208 }, 00:23:16.208 { 00:23:16.208 "method": "nvmf_create_subsystem", 00:23:16.208 "params": { 00:23:16.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.209 "allow_any_host": false, 00:23:16.209 "serial_number": "00000000000000000000", 00:23:16.209 "model_number": "SPDK bdev Controller", 00:23:16.209 "max_namespaces": 32, 00:23:16.209 "min_cntlid": 1, 00:23:16.209 "max_cntlid": 65519, 00:23:16.209 "ana_reporting": false 00:23:16.209 } 00:23:16.209 }, 00:23:16.209 { 00:23:16.209 "method": "nvmf_subsystem_add_host", 00:23:16.209 "params": { 00:23:16.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.209 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.209 "psk": "key0" 00:23:16.209 } 00:23:16.209 }, 00:23:16.209 { 00:23:16.209 "method": "nvmf_subsystem_add_ns", 00:23:16.209 "params": { 00:23:16.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.209 "namespace": { 00:23:16.209 "nsid": 1, 00:23:16.209 "bdev_name": "malloc0", 00:23:16.209 "nguid": "BB4B75922D554E29B54535C767C60718", 00:23:16.209 "uuid": "bb4b7592-2d55-4e29-b545-35c767c60718", 00:23:16.209 "no_auto_visible": false 00:23:16.209 } 00:23:16.209 } 00:23:16.209 }, 00:23:16.209 { 00:23:16.209 "method": "nvmf_subsystem_add_listener", 00:23:16.209 "params": { 00:23:16.209 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.209 "listen_address": { 00:23:16.209 "trtype": "TCP", 00:23:16.209 "adrfam": "IPv4", 00:23:16.209 "traddr": "10.0.0.2", 00:23:16.209 "trsvcid": 
"4420" 00:23:16.209 }, 00:23:16.209 "secure_channel": false, 00:23:16.209 "sock_impl": "ssl" 00:23:16.209 } 00:23:16.209 } 00:23:16.209 ] 00:23:16.209 } 00:23:16.209 ] 00:23:16.209 }' 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1470433 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1470433 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1470433 ']' 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.209 02:01:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.209 [2024-07-24 02:01:30.943557] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:23:16.209 [2024-07-24 02:01:30.943648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.209 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.209 [2024-07-24 02:01:31.004549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.209 [2024-07-24 02:01:31.091206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.209 [2024-07-24 02:01:31.091260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.209 [2024-07-24 02:01:31.091288] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.209 [2024-07-24 02:01:31.091299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.209 [2024-07-24 02:01:31.091308] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:16.209 [2024-07-24 02:01:31.091388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.469 [2024-07-24 02:01:31.333670] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.729 [2024-07-24 02:01:31.372076] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.729 [2024-07-24 02:01:31.372384] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1470582 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1470582 /var/tmp/bdevperf.sock 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1470582 ']' 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
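bdevperf is launched here with -z, so it idles until its RPC socket is driven, and waitforlisten simply polls that UNIX-domain socket before the test issues any RPCs. A minimal sketch of the same wait loop (spdk_get_version is just a cheap query; the retry count and sleep are illustrative):

    # block until the bdevperf RPC socket answers, or give up after roughly 10 seconds
    for i in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/bdevperf.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
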
00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.295 02:01:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:23:17.295 "subsystems": [ 00:23:17.295 { 00:23:17.295 "subsystem": "keyring", 00:23:17.295 "config": [ 00:23:17.295 { 00:23:17.295 "method": "keyring_file_add_key", 00:23:17.295 "params": { 00:23:17.295 "name": "key0", 00:23:17.295 "path": "/tmp/tmp.FOHimfKQiO" 00:23:17.295 } 00:23:17.295 } 00:23:17.295 ] 00:23:17.295 }, 00:23:17.295 { 00:23:17.295 "subsystem": "iobuf", 00:23:17.295 "config": [ 00:23:17.295 { 00:23:17.295 "method": "iobuf_set_options", 00:23:17.295 "params": { 00:23:17.295 "small_pool_count": 8192, 00:23:17.296 "large_pool_count": 1024, 00:23:17.296 "small_bufsize": 8192, 00:23:17.296 "large_bufsize": 135168 00:23:17.296 } 00:23:17.296 } 00:23:17.296 ] 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "subsystem": "sock", 00:23:17.296 "config": [ 00:23:17.296 { 00:23:17.296 "method": "sock_set_default_impl", 00:23:17.296 "params": { 00:23:17.296 "impl_name": "posix" 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "sock_impl_set_options", 00:23:17.296 "params": { 00:23:17.296 "impl_name": "ssl", 00:23:17.296 "recv_buf_size": 4096, 00:23:17.296 "send_buf_size": 4096, 00:23:17.296 "enable_recv_pipe": true, 00:23:17.296 "enable_quickack": false, 00:23:17.296 "enable_placement_id": 0, 00:23:17.296 "enable_zerocopy_send_server": true, 00:23:17.296 "enable_zerocopy_send_client": false, 00:23:17.296 "zerocopy_threshold": 0, 00:23:17.296 "tls_version": 0, 00:23:17.296 "enable_ktls": false 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "sock_impl_set_options", 00:23:17.296 "params": { 00:23:17.296 "impl_name": "posix", 00:23:17.296 "recv_buf_size": 2097152, 00:23:17.296 "send_buf_size": 2097152, 00:23:17.296 "enable_recv_pipe": true, 00:23:17.296 "enable_quickack": false, 00:23:17.296 "enable_placement_id": 0, 00:23:17.296 "enable_zerocopy_send_server": true, 00:23:17.296 "enable_zerocopy_send_client": false, 00:23:17.296 "zerocopy_threshold": 0, 00:23:17.296 "tls_version": 0, 00:23:17.296 "enable_ktls": false 00:23:17.296 } 00:23:17.296 } 00:23:17.296 ] 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "subsystem": "vmd", 00:23:17.296 "config": [] 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "subsystem": "accel", 00:23:17.296 "config": [ 00:23:17.296 { 00:23:17.296 "method": "accel_set_options", 00:23:17.296 "params": { 00:23:17.296 "small_cache_size": 128, 00:23:17.296 "large_cache_size": 16, 00:23:17.296 "task_count": 2048, 00:23:17.296 "sequence_count": 2048, 00:23:17.296 "buf_count": 2048 00:23:17.296 } 00:23:17.296 } 00:23:17.296 ] 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "subsystem": "bdev", 00:23:17.296 "config": [ 00:23:17.296 { 00:23:17.296 "method": "bdev_set_options", 00:23:17.296 "params": { 00:23:17.296 "bdev_io_pool_size": 65535, 00:23:17.296 "bdev_io_cache_size": 256, 00:23:17.296 "bdev_auto_examine": true, 00:23:17.296 "iobuf_small_cache_size": 128, 00:23:17.296 "iobuf_large_cache_size": 16 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_raid_set_options", 00:23:17.296 
"params": { 00:23:17.296 "process_window_size_kb": 1024, 00:23:17.296 "process_max_bandwidth_mb_sec": 0 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_iscsi_set_options", 00:23:17.296 "params": { 00:23:17.296 "timeout_sec": 30 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_nvme_set_options", 00:23:17.296 "params": { 00:23:17.296 "action_on_timeout": "none", 00:23:17.296 "timeout_us": 0, 00:23:17.296 "timeout_admin_us": 0, 00:23:17.296 "keep_alive_timeout_ms": 10000, 00:23:17.296 "arbitration_burst": 0, 00:23:17.296 "low_priority_weight": 0, 00:23:17.296 "medium_priority_weight": 0, 00:23:17.296 "high_priority_weight": 0, 00:23:17.296 "nvme_adminq_poll_period_us": 10000, 00:23:17.296 "nvme_ioq_poll_period_us": 0, 00:23:17.296 "io_queue_requests": 512, 00:23:17.296 "delay_cmd_submit": true, 00:23:17.296 "transport_retry_count": 4, 00:23:17.296 "bdev_retry_count": 3, 00:23:17.296 "transport_ack_timeout": 0, 00:23:17.296 "ctrlr_loss_timeout_sec": 0, 00:23:17.296 "reconnect_delay_sec": 0, 00:23:17.296 "fast_io_fail_timeout_sec": 0, 00:23:17.296 "disable_auto_failback": false, 00:23:17.296 "generate_uuids": false, 00:23:17.296 "transport_tos": 0, 00:23:17.296 "nvme_error_stat": false, 00:23:17.296 "rdma_srq_size": 0, 00:23:17.296 "io_path_stat": false, 00:23:17.296 "allow_accel_sequence": false, 00:23:17.296 "rdma_max_cq_size": 0, 00:23:17.296 "rdma_cm_event_timeout_ms": 0, 00:23:17.296 "dhchap_digests": [ 00:23:17.296 "sha256", 00:23:17.296 "sha384", 00:23:17.296 "sha512" 00:23:17.296 ], 00:23:17.296 "dhchap_dhgroups": [ 00:23:17.296 "null", 00:23:17.296 "ffdhe2048", 00:23:17.296 "ffdhe3072", 00:23:17.296 "ffdhe4096", 00:23:17.296 "ffdhe6144", 00:23:17.296 "ffdhe8192" 00:23:17.296 ] 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_nvme_attach_controller", 00:23:17.296 "params": { 00:23:17.296 "name": "nvme0", 00:23:17.296 "trtype": "TCP", 00:23:17.296 "adrfam": "IPv4", 00:23:17.296 "traddr": "10.0.0.2", 00:23:17.296 "trsvcid": "4420", 00:23:17.296 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.296 "prchk_reftag": false, 00:23:17.296 "prchk_guard": false, 00:23:17.296 "ctrlr_loss_timeout_sec": 0, 00:23:17.296 "reconnect_delay_sec": 0, 00:23:17.296 "fast_io_fail_timeout_sec": 0, 00:23:17.296 "psk": "key0", 00:23:17.296 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.296 "hdgst": false, 00:23:17.296 "ddgst": false 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_nvme_set_hotplug", 00:23:17.296 "params": { 00:23:17.296 "period_us": 100000, 00:23:17.296 "enable": false 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_enable_histogram", 00:23:17.296 "params": { 00:23:17.296 "name": "nvme0n1", 00:23:17.296 "enable": true 00:23:17.296 } 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "method": "bdev_wait_for_examine" 00:23:17.296 } 00:23:17.296 ] 00:23:17.296 }, 00:23:17.296 { 00:23:17.296 "subsystem": "nbd", 00:23:17.296 "config": [] 00:23:17.296 } 00:23:17.296 ] 00:23:17.296 }' 00:23:17.296 [2024-07-24 02:01:31.979512] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:23:17.296 [2024-07-24 02:01:31.979588] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470582 ] 00:23:17.296 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.296 [2024-07-24 02:01:32.040365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.296 [2024-07-24 02:01:32.130564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.555 [2024-07-24 02:01:32.307878] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.121 02:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.121 02:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:18.121 02:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:18.121 02:01:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:18.379 02:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.379 02:01:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.639 Running I/O for 1 seconds... 00:23:19.577 00:23:19.578 Latency(us) 00:23:19.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.578 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:19.578 Verification LBA range: start 0x0 length 0x2000 00:23:19.578 nvme0n1 : 1.02 3252.35 12.70 0.00 0.00 38956.60 7767.23 60196.03 00:23:19.578 =================================================================================================================== 00:23:19.578 Total : 3252.35 12.70 0.00 0.00 38956.60 7767.23 60196.03 00:23:19.578 0 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:19.578 nvmf_trace.0 00:23:19.578 02:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1470582 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1470582 ']' 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1470582 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470582 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470582' 00:23:19.578 killing process with pid 1470582 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1470582 00:23:19.578 Received shutdown signal, test time was about 1.000000 seconds 00:23:19.578 00:23:19.578 Latency(us) 00:23:19.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.578 =================================================================================================================== 00:23:19.578 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:19.578 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1470582 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.837 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.837 rmmod nvme_tcp 00:23:19.837 rmmod nvme_fabrics 00:23:19.837 rmmod nvme_keyring 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1470433 ']' 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1470433 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1470433 ']' 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1470433 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.097 02:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470433 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470433' 00:23:20.097 killing process with pid 1470433 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1470433 00:23:20.097 02:01:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1470433 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.358 02:01:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.jEq9f4NgQZ /tmp/tmp.kRXE1l7L1B /tmp/tmp.FOHimfKQiO 00:23:22.261 00:23:22.261 real 1m19.492s 00:23:22.261 user 2m5.411s 00:23:22.261 sys 0m27.356s 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.261 ************************************ 00:23:22.261 END TEST nvmf_tls 00:23:22.261 ************************************ 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.261 ************************************ 00:23:22.261 START TEST nvmf_fips 00:23:22.261 ************************************ 00:23:22.261 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:22.520 * Looking for test storage... 
00:23:22.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.520 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.521 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:22.522 Error setting digest 00:23:22.522 0032FBAD197F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:22.522 0032FBAD197F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.522 02:01:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:24.428 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:23:24.428 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:24.428 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:24.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.428 
02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.428 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:23:24.687 00:23:24.687 --- 10.0.0.2 ping statistics --- 00:23:24.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.687 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:23:24.687 00:23:24.687 --- 10.0.0.1 ping statistics --- 00:23:24.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.687 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1472932 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1472932 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1472932 ']' 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.687 02:01:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:24.687 [2024-07-24 02:01:39.450020] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
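The two ping checks above close out the loopback topology nvmf_tcp_init builds for the FIPS run: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the root namespace, with TCP/4420 opened between them. Condensed from the trace above (device names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
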
00:23:24.687 [2024-07-24 02:01:39.450110] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.687 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.687 [2024-07-24 02:01:39.518498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.946 [2024-07-24 02:01:39.608816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.946 [2024-07-24 02:01:39.608880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.946 [2024-07-24 02:01:39.608897] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.946 [2024-07-24 02:01:39.608910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.946 [2024-07-24 02:01:39.608922] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.946 [2024-07-24 02:01:39.608952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:25.512 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.772 [2024-07-24 02:01:40.609454] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.772 [2024-07-24 02:01:40.625449] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.772 [2024-07-24 02:01:40.625680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.772 
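The key configured above ('NVMeTLSkey-1:01:...') is a configured PSK in the NVMe/TCP TLS PSK interchange convention: the '01' field selects the HMAC (SHA-256) used to derive the retained PSK and the middle field carries the base64-encoded secret. The test only writes it to a file with tight permissions before handing the path to the target and later to bdev_nvme_attach_controller; roughly:

    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$KEY" > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt   # keep the PSK out of world-readable space
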
[2024-07-24 02:01:40.657973] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:25.772 malloc0 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1473089 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1473089 /var/tmp/bdevperf.sock 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1473089 ']' 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:26.033 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:26.033 [2024-07-24 02:01:40.747278] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:23:26.033 [2024-07-24 02:01:40.747383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473089 ] 00:23:26.033 EAL: No free 2048 kB hugepages reported on node 1 00:23:26.033 [2024-07-24 02:01:40.804383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.033 [2024-07-24 02:01:40.887569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.292 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.292 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:26.292 02:01:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:26.551 [2024-07-24 02:01:41.226291] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:26.551 [2024-07-24 02:01:41.226481] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:26.551 TLSTESTn1 00:23:26.551 02:01:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:26.551 Running I/O for 10 seconds... 
00:23:38.770 00:23:38.770 Latency(us) 00:23:38.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.770 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:38.770 Verification LBA range: start 0x0 length 0x2000 00:23:38.770 TLSTESTn1 : 10.02 3386.05 13.23 0.00 0.00 37734.71 9223.59 63691.28 00:23:38.770 =================================================================================================================== 00:23:38.770 Total : 3386.05 13.23 0.00 0.00 37734.71 9223.59 63691.28 00:23:38.770 0 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:38.770 nvmf_trace.0 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1473089 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1473089 ']' 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1473089 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473089 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473089' 00:23:38.770 killing process with pid 1473089 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1473089 00:23:38.770 Received shutdown signal, test time was about 10.000000 seconds 00:23:38.770 00:23:38.770 Latency(us) 00:23:38.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.770 =================================================================================================================== 00:23:38.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:38.770 
[2024-07-24 02:01:51.596401] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1473089 00:23:38.770 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:38.771 rmmod nvme_tcp 00:23:38.771 rmmod nvme_fabrics 00:23:38.771 rmmod nvme_keyring 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1472932 ']' 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1472932 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1472932 ']' 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1472932 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1472932 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1472932' 00:23:38.771 killing process with pid 1472932 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1472932 00:23:38.771 [2024-07-24 02:01:51.923047] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:38.771 02:01:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1472932 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.771 02:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.771 02:01:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:39.375 00:23:39.375 real 0m17.092s 00:23:39.375 user 0m21.972s 00:23:39.375 sys 0m5.577s 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:39.375 ************************************ 00:23:39.375 END TEST nvmf_fips 00:23:39.375 ************************************ 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:39.375 ************************************ 00:23:39.375 START TEST nvmf_fuzz 00:23:39.375 ************************************ 00:23:39.375 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:39.640 * Looking for test storage... 
00:23:39.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.640 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:39.641 02:01:54 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:41.546 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:41.546 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.546 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.547 02:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:41.547 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:41.547 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.547 
02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:23:41.547 00:23:41.547 --- 10.0.0.2 ping statistics --- 00:23:41.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.547 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:23:41.547 00:23:41.547 --- 10.0.0.1 ping statistics --- 00:23:41.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.547 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1476231 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1476231 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1476231 
']' 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.547 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.548 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.548 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.548 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 Malloc0 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:42.117 02:01:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:14.213 Fuzzing completed. Shutting down the fuzz application 00:24:14.213 00:24:14.213 Dumping successful admin opcodes: 00:24:14.213 8, 9, 10, 24, 00:24:14.213 Dumping successful io opcodes: 00:24:14.213 0, 9, 00:24:14.213 NS: 0x200003aeff00 I/O qp, Total commands completed: 477951, total successful commands: 2771, random_seed: 458316800 00:24:14.213 NS: 0x200003aeff00 admin qp, Total commands completed: 58432, total successful commands: 464, random_seed: 481500224 00:24:14.213 02:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:14.213 Fuzzing completed. Shutting down the fuzz application 00:24:14.213 00:24:14.213 Dumping successful admin opcodes: 00:24:14.213 24, 00:24:14.213 Dumping successful io opcodes: 00:24:14.213 00:24:14.213 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4190357745 00:24:14.213 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4190480973 00:24:14.213 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:14.213 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.214 rmmod nvme_tcp 00:24:14.214 rmmod nvme_fabrics 00:24:14.214 rmmod nvme_keyring 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1476231 ']' 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
1476231 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1476231 ']' 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 1476231 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1476231 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1476231' 00:24:14.214 killing process with pid 1476231 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 1476231 00:24:14.214 02:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 1476231 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.214 02:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:16.752 00:24:16.752 real 0m36.911s 00:24:16.752 user 0m50.891s 00:24:16.752 sys 0m15.401s 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:16.752 ************************************ 00:24:16.752 END TEST nvmf_fuzz 00:24:16.752 ************************************ 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.752 ************************************ 00:24:16.752 START TEST 
nvmf_multiconnection 00:24:16.752 ************************************ 00:24:16.752 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:16.752 * Looking for test storage... 00:24:16.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.753 02:02:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.661 02:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:18.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:18.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:18.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:18.661 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:18.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:24:18.661 00:24:18.661 --- 10.0.0.2 ping statistics --- 00:24:18.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.661 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:18.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:24:18.661 00:24:18.661 --- 10.0.0.1 ping statistics --- 00:24:18.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.661 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:18.661 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1481930 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1481930 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 1481930 ']' 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.662 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.662 [2024-07-24 02:02:33.517113] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:24:18.662 [2024-07-24 02:02:33.517182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.662 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.920 [2024-07-24 02:02:33.583807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.920 [2024-07-24 02:02:33.676192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.920 [2024-07-24 02:02:33.676265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.920 [2024-07-24 02:02:33.676281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.920 [2024-07-24 02:02:33.676295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.920 [2024-07-24 02:02:33.676307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.920 [2024-07-24 02:02:33.676393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.920 [2024-07-24 02:02:33.676475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.920 [2024-07-24 02:02:33.676568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.920 [2024-07-24 02:02:33.676570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.920 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 [2024-07-24 02:02:33.816479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 Malloc1 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.179 [2024-07-24 02:02:33.871421] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:19.179 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 Malloc2 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 Malloc3 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.180 02:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 Malloc4 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 Malloc5 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:19.180 02:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.180 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.441 Malloc6 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.441 Malloc7 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.441 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 Malloc8 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 
02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 Malloc9 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 Malloc10 00:24:19.442 02:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.442 Malloc11 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.442 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:19.702 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:20.273 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:20.273 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:20.273 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:20.273 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:20.273 02:02:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:22.178 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK1 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.179 02:02:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:22.748 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:22.748 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:22.748 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.748 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:22.748 02:02:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:25.321 02:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK2 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:25.321 02:02:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:25.581 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:25.581 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:25.581 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.581 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:25.581 02:02:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK3 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.486 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:28.055 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:28.055 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:28.055 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:28.055 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n 
'' ]] 00:24:28.055 02:02:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK4 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.590 02:02:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:30.850 02:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:30.850 02:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:30.850 02:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.850 02:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:30.850 02:02:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK5 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:32.756 02:02:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:33.694 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:33.694 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:33.694 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local 
nvme_device_counter=1 nvme_devices=0 00:24:33.694 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:33.694 02:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK6 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:35.599 02:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:36.533 02:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:36.533 02:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:36.533 02:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:36.533 02:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:36.533 02:02:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK7 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:38.437 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:39.374 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:39.374 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1196 -- # local i=0 00:24:39.374 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:39.374 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:39.374 02:02:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK8 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:41.279 02:02:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:42.215 02:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:42.215 02:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:42.215 02:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:42.215 02:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:42.215 02:02:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK9 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:44.120 02:02:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:45.059 02:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:45.059 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:45.060 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:45.060 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:45.060 02:02:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK10 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.010 02:03:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:47.946 02:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:47.946 02:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:47.946 02:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.946 02:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:47.946 02:03:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:49.849 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:49.849 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:49.849 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK11 00:24:50.108 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:50.108 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:50.108 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:50.108 02:03:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:50.108 [global] 00:24:50.108 thread=1 00:24:50.108 invalidate=1 00:24:50.108 rw=read 
00:24:50.108 time_based=1 00:24:50.108 runtime=10 00:24:50.108 ioengine=libaio 00:24:50.108 direct=1 00:24:50.108 bs=262144 00:24:50.108 iodepth=64 00:24:50.108 norandommap=1 00:24:50.108 numjobs=1 00:24:50.108 00:24:50.108 [job0] 00:24:50.108 filename=/dev/nvme0n1 00:24:50.108 [job1] 00:24:50.108 filename=/dev/nvme10n1 00:24:50.108 [job2] 00:24:50.108 filename=/dev/nvme1n1 00:24:50.108 [job3] 00:24:50.108 filename=/dev/nvme2n1 00:24:50.108 [job4] 00:24:50.108 filename=/dev/nvme3n1 00:24:50.108 [job5] 00:24:50.108 filename=/dev/nvme4n1 00:24:50.108 [job6] 00:24:50.108 filename=/dev/nvme5n1 00:24:50.108 [job7] 00:24:50.108 filename=/dev/nvme6n1 00:24:50.108 [job8] 00:24:50.108 filename=/dev/nvme7n1 00:24:50.108 [job9] 00:24:50.108 filename=/dev/nvme8n1 00:24:50.108 [job10] 00:24:50.108 filename=/dev/nvme9n1 00:24:50.108 Could not set queue depth (nvme0n1) 00:24:50.108 Could not set queue depth (nvme10n1) 00:24:50.108 Could not set queue depth (nvme1n1) 00:24:50.108 Could not set queue depth (nvme2n1) 00:24:50.108 Could not set queue depth (nvme3n1) 00:24:50.108 Could not set queue depth (nvme4n1) 00:24:50.108 Could not set queue depth (nvme5n1) 00:24:50.108 Could not set queue depth (nvme6n1) 00:24:50.108 Could not set queue depth (nvme7n1) 00:24:50.108 Could not set queue depth (nvme8n1) 00:24:50.108 Could not set queue depth (nvme9n1) 00:24:50.366 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:50.366 fio-3.35 00:24:50.366 Starting 11 threads 00:25:02.581 00:25:02.581 job0: (groupid=0, jobs=1): err= 0: pid=1486316: Wed Jul 24 02:03:15 2024 00:25:02.581 read: IOPS=621, BW=155MiB/s (163MB/s)(1569MiB/10107msec) 00:25:02.581 slat (usec): min=8, max=190352, avg=1298.97, stdev=6008.63 00:25:02.581 clat (msec): min=2, max=461, avg=101.69, stdev=58.48 00:25:02.581 lat (msec): min=2, max=461, avg=102.99, stdev=59.32 00:25:02.581 clat percentiles (msec): 00:25:02.581 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 61], 00:25:02.581 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 93], 60.00th=[ 106], 00:25:02.581 | 70.00th=[ 115], 80.00th=[ 133], 90.00th=[ 165], 95.00th=[ 211], 00:25:02.581 | 99.00th=[ 330], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:25:02.581 | 99.99th=[ 460] 00:25:02.581 bw ( KiB/s): min=46080, 
max=316416, per=8.45%, avg=159017.95, stdev=57444.44, samples=20 00:25:02.581 iops : min= 180, max= 1236, avg=621.10, stdev=224.40, samples=20 00:25:02.581 lat (msec) : 4=0.03%, 10=1.34%, 20=2.23%, 50=10.51%, 100=41.05% 00:25:02.581 lat (msec) : 250=41.98%, 500=2.85% 00:25:02.581 cpu : usr=0.27%, sys=1.85%, ctx=1212, majf=0, minf=4097 00:25:02.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=6277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job1: (groupid=0, jobs=1): err= 0: pid=1486317: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=622, BW=156MiB/s (163MB/s)(1572MiB/10101msec) 00:25:02.582 slat (usec): min=8, max=187469, avg=1138.32, stdev=5941.36 00:25:02.582 clat (msec): min=4, max=519, avg=101.61, stdev=58.90 00:25:02.582 lat (msec): min=8, max=519, avg=102.75, stdev=59.78 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 21], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 55], 00:25:02.582 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 91], 60.00th=[ 107], 00:25:02.582 | 70.00th=[ 122], 80.00th=[ 140], 90.00th=[ 169], 95.00th=[ 211], 00:25:02.582 | 99.00th=[ 305], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 388], 00:25:02.582 | 99.99th=[ 518] 00:25:02.582 bw ( KiB/s): min=55808, max=318464, per=8.47%, avg=159298.15, stdev=63370.02, samples=20 00:25:02.582 iops : min= 218, max= 1244, avg=622.20, stdev=247.55, samples=20 00:25:02.582 lat (msec) : 10=0.06%, 20=0.78%, 50=17.93%, 100=36.71%, 250=41.47% 00:25:02.582 lat (msec) : 500=3.01%, 750=0.05% 00:25:02.582 cpu : usr=0.26%, sys=1.49%, ctx=1327, majf=0, minf=3722 00:25:02.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=6287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job2: (groupid=0, jobs=1): err= 0: pid=1486318: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=532, BW=133MiB/s (140MB/s)(1341MiB/10073msec) 00:25:02.582 slat (usec): min=8, max=269880, avg=1102.86, stdev=6411.40 00:25:02.582 clat (usec): min=1651, max=396477, avg=119047.83, stdev=58743.88 00:25:02.582 lat (usec): min=1680, max=521579, avg=120150.69, stdev=59684.09 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 55], 20.00th=[ 73], 00:25:02.582 | 30.00th=[ 85], 40.00th=[ 99], 50.00th=[ 114], 60.00th=[ 128], 00:25:02.582 | 70.00th=[ 140], 80.00th=[ 159], 90.00th=[ 188], 95.00th=[ 224], 00:25:02.582 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 384], 99.95th=[ 388], 00:25:02.582 | 99.99th=[ 397] 00:25:02.582 bw ( KiB/s): min=60928, max=216064, per=7.21%, avg=135632.65, stdev=43755.22, samples=20 00:25:02.582 iops : min= 238, max= 844, avg=529.80, stdev=170.93, samples=20 00:25:02.582 lat (msec) : 2=0.04%, 4=0.11%, 10=0.07%, 20=0.67%, 50=6.68% 00:25:02.582 lat (msec) : 100=34.60%, 250=54.08%, 500=3.75% 00:25:02.582 cpu : usr=0.25%, sys=1.50%, ctx=1230, majf=0, minf=4097 00:25:02.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=5362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job3: (groupid=0, jobs=1): err= 0: pid=1486319: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=749, BW=187MiB/s (196MB/s)(1893MiB/10107msec) 00:25:02.582 slat (usec): min=8, max=113173, avg=921.05, stdev=4178.41 00:25:02.582 clat (usec): min=700, max=393521, avg=84454.10, stdev=55786.20 00:25:02.582 lat (usec): min=729, max=426648, avg=85375.15, stdev=56468.99 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 31], 20.00th=[ 44], 00:25:02.582 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 78], 60.00th=[ 90], 00:25:02.582 | 70.00th=[ 100], 80.00th=[ 115], 90.00th=[ 136], 95.00th=[ 192], 00:25:02.582 | 99.00th=[ 292], 99.50th=[ 338], 99.90th=[ 368], 99.95th=[ 376], 00:25:02.582 | 99.99th=[ 393] 00:25:02.582 bw ( KiB/s): min=60928, max=324608, per=10.22%, avg=192182.05, stdev=83491.78, samples=20 00:25:02.582 iops : min= 238, max= 1268, avg=750.65, stdev=326.06, samples=20 00:25:02.582 lat (usec) : 750=0.01%, 1000=0.01% 00:25:02.582 lat (msec) : 2=0.32%, 4=1.06%, 10=2.77%, 20=2.73%, 50=19.70% 00:25:02.582 lat (msec) : 100=44.45%, 250=26.10%, 500=2.84% 00:25:02.582 cpu : usr=0.39%, sys=1.88%, ctx=1362, majf=0, minf=4097 00:25:02.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=7572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job4: (groupid=0, jobs=1): err= 0: pid=1486322: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=1024, BW=256MiB/s (269MB/s)(2569MiB/10028msec) 00:25:02.582 slat (usec): min=8, max=76075, avg=870.76, stdev=3125.22 00:25:02.582 clat (msec): min=5, max=234, avg=61.55, stdev=38.20 00:25:02.582 lat (msec): min=6, max=234, avg=62.42, stdev=38.61 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:25:02.582 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 46], 60.00th=[ 57], 00:25:02.582 | 70.00th=[ 71], 80.00th=[ 96], 90.00th=[ 122], 95.00th=[ 136], 00:25:02.582 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 207], 00:25:02.582 | 99.99th=[ 215] 00:25:02.582 bw ( KiB/s): min=123904, max=482304, per=13.89%, avg=261388.25, stdev=125962.51, samples=20 00:25:02.582 iops : min= 484, max= 1884, avg=1021.00, stdev=492.07, samples=20 00:25:02.582 lat (msec) : 10=0.43%, 20=1.67%, 50=50.93%, 100=28.35%, 250=18.62% 00:25:02.582 cpu : usr=0.53%, sys=2.91%, ctx=1876, majf=0, minf=4097 00:25:02.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=10275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job5: (groupid=0, jobs=1): err= 0: pid=1486323: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=705, BW=176MiB/s (185MB/s)(1783MiB/10111msec) 00:25:02.582 slat (usec): min=8, max=203622, avg=1072.41, stdev=4880.30 00:25:02.582 clat (msec): min=2, 
max=334, avg=89.60, stdev=51.92 00:25:02.582 lat (msec): min=2, max=455, avg=90.67, stdev=52.53 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 29], 20.00th=[ 41], 00:25:02.582 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 96], 00:25:02.582 | 70.00th=[ 111], 80.00th=[ 131], 90.00th=[ 150], 95.00th=[ 182], 00:25:02.582 | 99.00th=[ 259], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 330], 00:25:02.582 | 99.99th=[ 334] 00:25:02.582 bw ( KiB/s): min=93696, max=308736, per=9.61%, avg=180864.30, stdev=62617.12, samples=20 00:25:02.582 iops : min= 366, max= 1206, avg=706.40, stdev=244.65, samples=20 00:25:02.582 lat (msec) : 4=0.07%, 10=0.53%, 20=2.26%, 50=22.36%, 100=37.50% 00:25:02.582 lat (msec) : 250=36.23%, 500=1.05% 00:25:02.582 cpu : usr=0.34%, sys=2.18%, ctx=1435, majf=0, minf=4097 00:25:02.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=7130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job6: (groupid=0, jobs=1): err= 0: pid=1486324: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=537, BW=134MiB/s (141MB/s)(1355MiB/10092msec) 00:25:02.582 slat (usec): min=8, max=105733, avg=1133.70, stdev=5423.02 00:25:02.582 clat (usec): min=763, max=434769, avg=117955.14, stdev=63025.61 00:25:02.582 lat (usec): min=784, max=434808, avg=119088.84, stdev=63985.06 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 3], 5.00th=[ 13], 10.00th=[ 34], 20.00th=[ 66], 00:25:02.582 | 30.00th=[ 88], 40.00th=[ 105], 50.00th=[ 120], 60.00th=[ 132], 00:25:02.582 | 70.00th=[ 144], 80.00th=[ 163], 90.00th=[ 192], 95.00th=[ 220], 00:25:02.582 | 99.00th=[ 321], 99.50th=[ 338], 99.90th=[ 376], 99.95th=[ 393], 00:25:02.582 | 99.99th=[ 435] 00:25:02.582 bw ( KiB/s): min=71168, max=287744, per=7.29%, avg=137124.95, stdev=51309.26, samples=20 00:25:02.582 iops : min= 278, max= 1124, avg=535.60, stdev=200.46, samples=20 00:25:02.582 lat (usec) : 1000=0.13% 00:25:02.582 lat (msec) : 2=0.42%, 4=1.35%, 10=2.60%, 20=1.55%, 50=9.56% 00:25:02.582 lat (msec) : 100=21.42%, 250=59.87%, 500=3.10% 00:25:02.582 cpu : usr=0.24%, sys=1.69%, ctx=1340, majf=0, minf=4097 00:25:02.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:02.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.582 issued rwts: total=5420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.582 job7: (groupid=0, jobs=1): err= 0: pid=1486325: Wed Jul 24 02:03:15 2024 00:25:02.582 read: IOPS=503, BW=126MiB/s (132MB/s)(1273MiB/10109msec) 00:25:02.582 slat (usec): min=10, max=126793, avg=1933.01, stdev=6270.02 00:25:02.582 clat (msec): min=19, max=407, avg=125.04, stdev=57.87 00:25:02.582 lat (msec): min=19, max=432, avg=126.97, stdev=58.84 00:25:02.582 clat percentiles (msec): 00:25:02.582 | 1.00th=[ 31], 5.00th=[ 55], 10.00th=[ 62], 20.00th=[ 73], 00:25:02.582 | 30.00th=[ 85], 40.00th=[ 99], 50.00th=[ 122], 60.00th=[ 136], 00:25:02.582 | 70.00th=[ 153], 80.00th=[ 167], 90.00th=[ 190], 95.00th=[ 234], 00:25:02.582 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 372], 00:25:02.582 | 
99.99th=[ 409] 00:25:02.582 bw ( KiB/s): min=62976, max=230912, per=6.84%, avg=128696.00, stdev=52063.31, samples=20 00:25:02.582 iops : min= 246, max= 902, avg=502.70, stdev=203.38, samples=20 00:25:02.582 lat (msec) : 20=0.04%, 50=3.63%, 100=37.20%, 250=54.75%, 500=4.38% 00:25:02.582 cpu : usr=0.31%, sys=1.59%, ctx=953, majf=0, minf=4097 00:25:02.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:02.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.583 issued rwts: total=5092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.583 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.583 job8: (groupid=0, jobs=1): err= 0: pid=1486328: Wed Jul 24 02:03:15 2024 00:25:02.583 read: IOPS=572, BW=143MiB/s (150MB/s)(1446MiB/10098msec) 00:25:02.583 slat (usec): min=9, max=74661, avg=1331.71, stdev=5130.53 00:25:02.583 clat (msec): min=3, max=395, avg=110.31, stdev=58.49 00:25:02.583 lat (msec): min=3, max=414, avg=111.64, stdev=59.08 00:25:02.583 clat percentiles (msec): 00:25:02.583 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 45], 20.00th=[ 66], 00:25:02.583 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 113], 00:25:02.583 | 70.00th=[ 136], 80.00th=[ 153], 90.00th=[ 182], 95.00th=[ 213], 00:25:02.583 | 99.00th=[ 321], 99.50th=[ 342], 99.90th=[ 359], 99.95th=[ 397], 00:25:02.583 | 99.99th=[ 397] 00:25:02.583 bw ( KiB/s): min=62976, max=322560, per=7.78%, avg=146448.10, stdev=62850.42, samples=20 00:25:02.583 iops : min= 246, max= 1260, avg=572.00, stdev=245.56, samples=20 00:25:02.583 lat (msec) : 4=0.03%, 10=0.54%, 20=2.61%, 50=8.70%, 100=40.20% 00:25:02.583 lat (msec) : 250=44.55%, 500=3.37% 00:25:02.583 cpu : usr=0.21%, sys=1.69%, ctx=1174, majf=0, minf=4097 00:25:02.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:02.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.583 issued rwts: total=5784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.583 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.583 job9: (groupid=0, jobs=1): err= 0: pid=1486329: Wed Jul 24 02:03:15 2024 00:25:02.583 read: IOPS=607, BW=152MiB/s (159MB/s)(1523MiB/10020msec) 00:25:02.583 slat (usec): min=8, max=143129, avg=1353.11, stdev=5192.66 00:25:02.583 clat (usec): min=1531, max=389773, avg=103886.85, stdev=58046.31 00:25:02.583 lat (usec): min=1552, max=401295, avg=105239.96, stdev=58933.85 00:25:02.583 clat percentiles (msec): 00:25:02.583 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 31], 20.00th=[ 54], 00:25:02.583 | 30.00th=[ 75], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 115], 00:25:02.583 | 70.00th=[ 128], 80.00th=[ 144], 90.00th=[ 171], 95.00th=[ 190], 00:25:02.583 | 99.00th=[ 309], 99.50th=[ 355], 99.90th=[ 384], 99.95th=[ 388], 00:25:02.583 | 99.99th=[ 388] 00:25:02.583 bw ( KiB/s): min=79360, max=313344, per=8.20%, avg=154249.45, stdev=56098.24, samples=20 00:25:02.583 iops : min= 310, max= 1224, avg=602.45, stdev=219.03, samples=20 00:25:02.583 lat (msec) : 2=0.02%, 4=1.23%, 10=2.33%, 20=4.07%, 50=11.07% 00:25:02.583 lat (msec) : 100=28.28%, 250=51.07%, 500=1.94% 00:25:02.583 cpu : usr=0.32%, sys=2.00%, ctx=1318, majf=0, minf=4097 00:25:02.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:02.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:02.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.583 issued rwts: total=6090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.583 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.583 job10: (groupid=0, jobs=1): err= 0: pid=1486334: Wed Jul 24 02:03:15 2024 00:25:02.583 read: IOPS=899, BW=225MiB/s (236MB/s)(2253MiB/10022msec) 00:25:02.583 slat (usec): min=8, max=119546, avg=754.98, stdev=3338.87 00:25:02.583 clat (usec): min=1015, max=293039, avg=70390.84, stdev=40765.30 00:25:02.583 lat (usec): min=1045, max=293067, avg=71145.83, stdev=41128.11 00:25:02.583 clat percentiles (msec): 00:25:02.583 | 1.00th=[ 5], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 41], 00:25:02.583 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 75], 00:25:02.583 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 124], 95.00th=[ 138], 00:25:02.583 | 99.00th=[ 222], 99.50th=[ 275], 99.90th=[ 288], 99.95th=[ 288], 00:25:02.583 | 99.99th=[ 292] 00:25:02.583 bw ( KiB/s): min=117248, max=462336, per=12.17%, avg=229013.20, stdev=91237.55, samples=20 00:25:02.583 iops : min= 458, max= 1806, avg=894.50, stdev=356.39, samples=20 00:25:02.583 lat (msec) : 2=0.36%, 4=0.37%, 10=2.85%, 20=5.28%, 50=22.21% 00:25:02.583 lat (msec) : 100=49.99%, 250=18.16%, 500=0.79% 00:25:02.583 cpu : usr=0.33%, sys=2.44%, ctx=1727, majf=0, minf=4097 00:25:02.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:02.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:02.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:02.583 issued rwts: total=9010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:02.583 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:02.583 00:25:02.583 Run status group 0 (all jobs): 00:25:02.583 READ: bw=1837MiB/s (1926MB/s), 126MiB/s-256MiB/s (132MB/s-269MB/s), io=18.1GiB (19.5GB), run=10020-10111msec 00:25:02.583 00:25:02.583 Disk stats (read/write): 00:25:02.583 nvme0n1: ios=12426/0, merge=0/0, ticks=1239590/0, in_queue=1239590, util=96.07% 00:25:02.583 nvme10n1: ios=12516/0, merge=0/0, ticks=1253080/0, in_queue=1253080, util=96.50% 00:25:02.583 nvme1n1: ios=10662/0, merge=0/0, ticks=1261060/0, in_queue=1261060, util=97.04% 00:25:02.583 nvme2n1: ios=15023/0, merge=0/0, ticks=1241437/0, in_queue=1241437, util=97.32% 00:25:02.583 nvme3n1: ios=19949/0, merge=0/0, ticks=1223108/0, in_queue=1223108, util=97.40% 00:25:02.583 nvme4n1: ios=14197/0, merge=0/0, ticks=1254369/0, in_queue=1254369, util=98.09% 00:25:02.583 nvme5n1: ios=10827/0, merge=0/0, ticks=1254219/0, in_queue=1254219, util=98.30% 00:25:02.583 nvme6n1: ios=10062/0, merge=0/0, ticks=1235719/0, in_queue=1235719, util=98.43% 00:25:02.583 nvme7n1: ios=11534/0, merge=0/0, ticks=1254676/0, in_queue=1254676, util=98.91% 00:25:02.583 nvme8n1: ios=11603/0, merge=0/0, ticks=1224684/0, in_queue=1224684, util=99.07% 00:25:02.583 nvme9n1: ios=17589/0, merge=0/0, ticks=1215896/0, in_queue=1215896, util=99.20% 00:25:02.583 02:03:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:02.583 [global] 00:25:02.583 thread=1 00:25:02.583 invalidate=1 00:25:02.583 rw=randwrite 00:25:02.583 time_based=1 00:25:02.583 runtime=10 00:25:02.583 ioengine=libaio 00:25:02.583 direct=1 00:25:02.583 bs=262144 00:25:02.583 iodepth=64 00:25:02.583 norandommap=1 00:25:02.583 
numjobs=1 00:25:02.583 00:25:02.583 [job0] 00:25:02.583 filename=/dev/nvme0n1 00:25:02.583 [job1] 00:25:02.583 filename=/dev/nvme10n1 00:25:02.583 [job2] 00:25:02.583 filename=/dev/nvme1n1 00:25:02.583 [job3] 00:25:02.583 filename=/dev/nvme2n1 00:25:02.583 [job4] 00:25:02.583 filename=/dev/nvme3n1 00:25:02.583 [job5] 00:25:02.583 filename=/dev/nvme4n1 00:25:02.583 [job6] 00:25:02.583 filename=/dev/nvme5n1 00:25:02.583 [job7] 00:25:02.583 filename=/dev/nvme6n1 00:25:02.583 [job8] 00:25:02.583 filename=/dev/nvme7n1 00:25:02.583 [job9] 00:25:02.583 filename=/dev/nvme8n1 00:25:02.583 [job10] 00:25:02.583 filename=/dev/nvme9n1 00:25:02.583 Could not set queue depth (nvme0n1) 00:25:02.583 Could not set queue depth (nvme10n1) 00:25:02.583 Could not set queue depth (nvme1n1) 00:25:02.583 Could not set queue depth (nvme2n1) 00:25:02.583 Could not set queue depth (nvme3n1) 00:25:02.583 Could not set queue depth (nvme4n1) 00:25:02.583 Could not set queue depth (nvme5n1) 00:25:02.583 Could not set queue depth (nvme6n1) 00:25:02.583 Could not set queue depth (nvme7n1) 00:25:02.583 Could not set queue depth (nvme8n1) 00:25:02.583 Could not set queue depth (nvme9n1) 00:25:02.583 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.583 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.584 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.584 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:02.584 fio-3.35 00:25:02.584 Starting 11 threads 00:25:12.565 00:25:12.565 job0: (groupid=0, jobs=1): err= 0: pid=1487853: Wed Jul 24 02:03:26 2024 00:25:12.565 write: IOPS=1011, BW=253MiB/s (265MB/s)(2547MiB/10071msec); 0 zone resets 00:25:12.565 slat (usec): min=16, max=16993, avg=816.52, stdev=1813.65 00:25:12.565 clat (usec): min=1366, max=243435, avg=62415.17, stdev=33517.59 00:25:12.565 lat (usec): min=1431, max=243481, avg=63231.69, stdev=33881.05 00:25:12.565 clat percentiles (msec): 00:25:12.565 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 42], 00:25:12.565 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 47], 60.00th=[ 57], 00:25:12.565 | 70.00th=[ 77], 80.00th=[ 84], 90.00th=[ 109], 95.00th=[ 136], 00:25:12.565 | 99.00th=[ 167], 99.50th=[ 188], 99.90th=[ 213], 99.95th=[ 226], 00:25:12.565 | 99.99th=[ 241] 00:25:12.565 bw ( KiB/s): min=134144, max=391168, per=18.38%, avg=259225.60, stdev=83615.32, samples=20 00:25:12.565 iops : min= 524, max= 
1528, avg=1012.60, stdev=326.62, samples=20 00:25:12.565 lat (msec) : 2=0.05%, 4=0.16%, 10=1.24%, 20=3.10%, 50=51.63% 00:25:12.565 lat (msec) : 100=30.92%, 250=12.91% 00:25:12.565 cpu : usr=3.17%, sys=3.16%, ctx=3930, majf=0, minf=1 00:25:12.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:12.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.565 issued rwts: total=0,10189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.565 job1: (groupid=0, jobs=1): err= 0: pid=1487855: Wed Jul 24 02:03:26 2024 00:25:12.565 write: IOPS=263, BW=65.9MiB/s (69.1MB/s)(673MiB/10216msec); 0 zone resets 00:25:12.565 slat (usec): min=22, max=75777, avg=3416.75, stdev=7768.80 00:25:12.565 clat (usec): min=813, max=554671, avg=239315.54, stdev=108026.25 00:25:12.565 lat (usec): min=845, max=573807, avg=242732.29, stdev=109201.91 00:25:12.565 clat percentiles (msec): 00:25:12.565 | 1.00th=[ 6], 5.00th=[ 47], 10.00th=[ 93], 20.00th=[ 157], 00:25:12.565 | 30.00th=[ 190], 40.00th=[ 209], 50.00th=[ 245], 60.00th=[ 268], 00:25:12.565 | 70.00th=[ 292], 80.00th=[ 330], 90.00th=[ 368], 95.00th=[ 422], 00:25:12.565 | 99.00th=[ 527], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:25:12.565 | 99.99th=[ 558] 00:25:12.565 bw ( KiB/s): min=34816, max=139776, per=4.77%, avg=67283.15, stdev=25867.85, samples=20 00:25:12.565 iops : min= 136, max= 546, avg=262.80, stdev=101.05, samples=20 00:25:12.565 lat (usec) : 1000=0.19% 00:25:12.565 lat (msec) : 2=0.63%, 4=0.07%, 10=0.89%, 20=0.37%, 50=3.05% 00:25:12.565 lat (msec) : 100=5.94%, 250=41.05%, 500=46.17%, 750=1.63% 00:25:12.565 cpu : usr=0.84%, sys=0.81%, ctx=1004, majf=0, minf=1 00:25:12.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:25:12.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.565 issued rwts: total=0,2692,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.565 job2: (groupid=0, jobs=1): err= 0: pid=1487856: Wed Jul 24 02:03:26 2024 00:25:12.565 write: IOPS=706, BW=177MiB/s (185MB/s)(1779MiB/10071msec); 0 zone resets 00:25:12.565 slat (usec): min=18, max=149543, avg=1235.73, stdev=3782.26 00:25:12.565 clat (usec): min=940, max=551812, avg=89294.49, stdev=75392.69 00:25:12.565 lat (usec): min=972, max=551871, avg=90530.22, stdev=76419.28 00:25:12.565 clat percentiles (msec): 00:25:12.565 | 1.00th=[ 5], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 41], 00:25:12.565 | 30.00th=[ 45], 40.00th=[ 59], 50.00th=[ 80], 60.00th=[ 88], 00:25:12.565 | 70.00th=[ 102], 80.00th=[ 117], 90.00th=[ 163], 95.00th=[ 201], 00:25:12.565 | 99.00th=[ 426], 99.50th=[ 523], 99.90th=[ 550], 99.95th=[ 550], 00:25:12.565 | 99.99th=[ 550] 00:25:12.565 bw ( KiB/s): min=36864, max=400896, per=12.80%, avg=180523.50, stdev=91065.61, samples=20 00:25:12.565 iops : min= 144, max= 1566, avg=705.15, stdev=355.73, samples=20 00:25:12.565 lat (usec) : 1000=0.03% 00:25:12.565 lat (msec) : 2=0.24%, 4=0.73%, 10=2.40%, 20=4.23%, 50=26.16% 00:25:12.565 lat (msec) : 100=34.66%, 250=27.94%, 500=2.88%, 750=0.72% 00:25:12.565 cpu : usr=2.33%, sys=2.20%, ctx=2838, majf=0, minf=1 00:25:12.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:25:12.565 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.565 issued rwts: total=0,7114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.565 job3: (groupid=0, jobs=1): err= 0: pid=1487857: Wed Jul 24 02:03:26 2024 00:25:12.565 write: IOPS=288, BW=72.1MiB/s (75.6MB/s)(737MiB/10219msec); 0 zone resets 00:25:12.565 slat (usec): min=24, max=108840, avg=2847.63, stdev=7620.52 00:25:12.565 clat (msec): min=3, max=589, avg=219.03, stdev=121.32 00:25:12.565 lat (msec): min=4, max=589, avg=221.88, stdev=123.24 00:25:12.565 clat percentiles (msec): 00:25:12.565 | 1.00th=[ 16], 5.00th=[ 30], 10.00th=[ 58], 20.00th=[ 113], 00:25:12.565 | 30.00th=[ 146], 40.00th=[ 186], 50.00th=[ 211], 60.00th=[ 234], 00:25:12.565 | 70.00th=[ 271], 80.00th=[ 334], 90.00th=[ 376], 95.00th=[ 418], 00:25:12.565 | 99.00th=[ 558], 99.50th=[ 575], 99.90th=[ 592], 99.95th=[ 592], 00:25:12.565 | 99.99th=[ 592] 00:25:12.565 bw ( KiB/s): min=32768, max=170496, per=5.23%, avg=73779.20, stdev=36937.99, samples=20 00:25:12.565 iops : min= 128, max= 666, avg=288.20, stdev=144.29, samples=20 00:25:12.565 lat (msec) : 4=0.03%, 10=0.44%, 20=1.70%, 50=6.82%, 100=7.88% 00:25:12.565 lat (msec) : 250=48.07%, 500=32.96%, 750=2.10% 00:25:12.565 cpu : usr=1.02%, sys=0.96%, ctx=1528, majf=0, minf=1 00:25:12.565 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:25:12.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.565 issued rwts: total=0,2946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.565 job4: (groupid=0, jobs=1): err= 0: pid=1487858: Wed Jul 24 02:03:26 2024 00:25:12.565 write: IOPS=397, BW=99.4MiB/s (104MB/s)(1015MiB/10216msec); 0 zone resets 00:25:12.565 slat (usec): min=22, max=139048, avg=2018.50, stdev=6008.20 00:25:12.565 clat (msec): min=2, max=551, avg=158.86, stdev=119.57 00:25:12.566 lat (msec): min=2, max=562, avg=160.88, stdev=121.13 00:25:12.566 clat percentiles (msec): 00:25:12.566 | 1.00th=[ 6], 5.00th=[ 16], 10.00th=[ 37], 20.00th=[ 65], 00:25:12.566 | 30.00th=[ 84], 40.00th=[ 105], 50.00th=[ 120], 60.00th=[ 140], 00:25:12.566 | 70.00th=[ 192], 80.00th=[ 292], 90.00th=[ 342], 95.00th=[ 380], 00:25:12.566 | 99.00th=[ 518], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:25:12.566 | 99.99th=[ 550] 00:25:12.566 bw ( KiB/s): min=36864, max=223744, per=7.26%, avg=102337.00, stdev=53067.53, samples=20 00:25:12.566 iops : min= 144, max= 874, avg=399.75, stdev=207.29, samples=20 00:25:12.566 lat (msec) : 4=0.57%, 10=2.04%, 20=3.30%, 50=10.20%, 100=22.39% 00:25:12.566 lat (msec) : 250=37.24%, 500=22.96%, 750=1.31% 00:25:12.566 cpu : usr=1.26%, sys=1.46%, ctx=1954, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.566 issued rwts: total=0,4060,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.566 job5: (groupid=0, jobs=1): err= 0: pid=1487876: Wed Jul 24 02:03:26 2024 00:25:12.566 write: IOPS=328, BW=82.1MiB/s (86.1MB/s)(825MiB/10040msec); 0 zone resets 
00:25:12.566 slat (usec): min=22, max=77851, avg=2524.03, stdev=6817.27 00:25:12.566 clat (usec): min=818, max=551509, avg=192033.82, stdev=125073.40 00:25:12.566 lat (usec): min=887, max=551570, avg=194557.86, stdev=126757.13 00:25:12.566 clat percentiles (usec): 00:25:12.566 | 1.00th=[ 1434], 5.00th=[ 14222], 10.00th=[ 29492], 20.00th=[ 56361], 00:25:12.566 | 30.00th=[100140], 40.00th=[158335], 50.00th=[189793], 60.00th=[223347], 00:25:12.566 | 70.00th=[270533], 80.00th=[308282], 90.00th=[346031], 95.00th=[396362], 00:25:12.566 | 99.00th=[522191], 99.50th=[549454], 99.90th=[549454], 99.95th=[549454], 00:25:12.566 | 99.99th=[549454] 00:25:12.566 bw ( KiB/s): min=34816, max=207872, per=5.87%, avg=82841.60, stdev=44122.34, samples=20 00:25:12.566 iops : min= 136, max= 812, avg=323.60, stdev=172.35, samples=20 00:25:12.566 lat (usec) : 1000=0.36% 00:25:12.566 lat (msec) : 2=1.09%, 4=0.27%, 10=1.12%, 20=4.03%, 50=11.61% 00:25:12.566 lat (msec) : 100=11.12%, 250=35.40%, 500=33.43%, 750=1.55% 00:25:12.566 cpu : usr=1.03%, sys=1.09%, ctx=1665, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.566 issued rwts: total=0,3299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.566 job6: (groupid=0, jobs=1): err= 0: pid=1487877: Wed Jul 24 02:03:26 2024 00:25:12.566 write: IOPS=319, BW=79.9MiB/s (83.8MB/s)(816MiB/10213msec); 0 zone resets 00:25:12.566 slat (usec): min=16, max=130750, avg=2510.72, stdev=7493.07 00:25:12.566 clat (usec): min=1375, max=600198, avg=197703.07, stdev=133988.07 00:25:12.566 lat (usec): min=1417, max=600245, avg=200213.79, stdev=135855.59 00:25:12.566 clat percentiles (msec): 00:25:12.566 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 26], 20.00th=[ 54], 00:25:12.566 | 30.00th=[ 94], 40.00th=[ 163], 50.00th=[ 205], 60.00th=[ 234], 00:25:12.566 | 70.00th=[ 262], 80.00th=[ 300], 90.00th=[ 376], 95.00th=[ 447], 00:25:12.566 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 600], 00:25:12.566 | 99.99th=[ 600] 00:25:12.566 bw ( KiB/s): min=43008, max=276480, per=5.81%, avg=81920.00, stdev=54280.39, samples=20 00:25:12.566 iops : min= 168, max= 1080, avg=320.00, stdev=212.03, samples=20 00:25:12.566 lat (msec) : 2=0.15%, 4=0.43%, 10=2.82%, 20=4.93%, 50=10.30% 00:25:12.566 lat (msec) : 100=11.80%, 250=36.16%, 500=31.32%, 750=2.08% 00:25:12.566 cpu : usr=0.76%, sys=1.02%, ctx=1804, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.566 issued rwts: total=0,3263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.566 job7: (groupid=0, jobs=1): err= 0: pid=1487878: Wed Jul 24 02:03:26 2024 00:25:12.566 write: IOPS=692, BW=173MiB/s (182MB/s)(1759MiB/10152msec); 0 zone resets 00:25:12.566 slat (usec): min=24, max=56916, avg=1287.03, stdev=2980.32 00:25:12.566 clat (msec): min=4, max=282, avg=91.00, stdev=57.19 00:25:12.566 lat (msec): min=4, max=282, avg=92.29, stdev=57.89 00:25:12.566 clat percentiles (msec): 00:25:12.566 | 1.00th=[ 22], 5.00th=[ 42], 10.00th=[ 44], 20.00th=[ 47], 00:25:12.566 | 30.00th=[ 
52], 40.00th=[ 67], 50.00th=[ 77], 60.00th=[ 81], 00:25:12.566 | 70.00th=[ 91], 80.00th=[ 121], 90.00th=[ 188], 95.00th=[ 232], 00:25:12.566 | 99.00th=[ 268], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:25:12.566 | 99.99th=[ 284] 00:25:12.566 bw ( KiB/s): min=61440, max=335360, per=12.66%, avg=178466.40, stdev=79706.66, samples=20 00:25:12.566 iops : min= 240, max= 1310, avg=697.10, stdev=311.40, samples=20 00:25:12.566 lat (msec) : 10=0.38%, 20=0.50%, 50=27.44%, 100=44.57%, 250=24.28% 00:25:12.566 lat (msec) : 500=2.83% 00:25:12.566 cpu : usr=2.28%, sys=2.18%, ctx=2301, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.566 issued rwts: total=0,7034,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.566 job8: (groupid=0, jobs=1): err= 0: pid=1487879: Wed Jul 24 02:03:26 2024 00:25:12.566 write: IOPS=556, BW=139MiB/s (146MB/s)(1397MiB/10044msec); 0 zone resets 00:25:12.566 slat (usec): min=20, max=118617, avg=1008.17, stdev=4649.33 00:25:12.566 clat (usec): min=1181, max=573300, avg=113945.63, stdev=104415.48 00:25:12.566 lat (usec): min=1228, max=573383, avg=114953.80, stdev=105623.99 00:25:12.566 clat percentiles (msec): 00:25:12.566 | 1.00th=[ 6], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 34], 00:25:12.566 | 30.00th=[ 47], 40.00th=[ 68], 50.00th=[ 90], 60.00th=[ 110], 00:25:12.566 | 70.00th=[ 124], 80.00th=[ 161], 90.00th=[ 247], 95.00th=[ 368], 00:25:12.566 | 99.00th=[ 468], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 567], 00:25:12.566 | 99.99th=[ 575] 00:25:12.566 bw ( KiB/s): min=34816, max=283136, per=10.03%, avg=141440.00, stdev=74350.94, samples=20 00:25:12.566 iops : min= 136, max= 1106, avg=552.50, stdev=290.43, samples=20 00:25:12.566 lat (msec) : 2=0.16%, 4=0.47%, 10=2.90%, 20=6.50%, 50=21.80% 00:25:12.566 lat (msec) : 100=22.17%, 250=36.22%, 500=9.04%, 750=0.75% 00:25:12.566 cpu : usr=1.55%, sys=1.97%, ctx=3944, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.566 issued rwts: total=0,5588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.566 job9: (groupid=0, jobs=1): err= 0: pid=1487880: Wed Jul 24 02:03:26 2024 00:25:12.566 write: IOPS=541, BW=135MiB/s (142MB/s)(1382MiB/10218msec); 0 zone resets 00:25:12.566 slat (usec): min=20, max=135255, avg=1259.62, stdev=5123.98 00:25:12.566 clat (msec): min=2, max=552, avg=116.96, stdev=113.59 00:25:12.566 lat (msec): min=2, max=552, avg=118.22, stdev=115.01 00:25:12.566 clat percentiles (msec): 00:25:12.566 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 31], 20.00th=[ 43], 00:25:12.566 | 30.00th=[ 46], 40.00th=[ 57], 50.00th=[ 77], 60.00th=[ 85], 00:25:12.566 | 70.00th=[ 118], 80.00th=[ 174], 90.00th=[ 317], 95.00th=[ 388], 00:25:12.566 | 99.00th=[ 493], 99.50th=[ 523], 99.90th=[ 550], 99.95th=[ 550], 00:25:12.566 | 99.99th=[ 550] 00:25:12.566 bw ( KiB/s): min=34816, max=322560, per=9.92%, avg=139929.60, stdev=84680.09, samples=20 00:25:12.566 iops : min= 136, max= 1260, avg=546.60, stdev=330.78, samples=20 00:25:12.566 lat (msec) : 4=0.24%, 10=2.30%, 
20=3.69%, 50=31.14%, 100=28.18% 00:25:12.566 lat (msec) : 250=20.78%, 500=12.75%, 750=0.92% 00:25:12.566 cpu : usr=1.66%, sys=2.08%, ctx=3284, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.566 issued rwts: total=0,5529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.566 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.566 job10: (groupid=0, jobs=1): err= 0: pid=1487881: Wed Jul 24 02:03:26 2024 00:25:12.566 write: IOPS=447, BW=112MiB/s (117MB/s)(1143MiB/10218msec); 0 zone resets 00:25:12.566 slat (usec): min=22, max=132128, avg=1658.25, stdev=5769.77 00:25:12.566 clat (usec): min=1967, max=637172, avg=141257.21, stdev=121127.85 00:25:12.566 lat (msec): min=2, max=637, avg=142.92, stdev=122.62 00:25:12.566 clat percentiles (msec): 00:25:12.566 | 1.00th=[ 9], 5.00th=[ 19], 10.00th=[ 34], 20.00th=[ 45], 00:25:12.566 | 30.00th=[ 70], 40.00th=[ 83], 50.00th=[ 103], 60.00th=[ 124], 00:25:12.566 | 70.00th=[ 157], 80.00th=[ 203], 90.00th=[ 342], 95.00th=[ 388], 00:25:12.566 | 99.00th=[ 550], 99.50th=[ 600], 99.90th=[ 625], 99.95th=[ 634], 00:25:12.566 | 99.99th=[ 634] 00:25:12.566 bw ( KiB/s): min=30720, max=242688, per=8.19%, avg=115430.40, stdev=65306.86, samples=20 00:25:12.566 iops : min= 120, max= 948, avg=450.90, stdev=255.10, samples=20 00:25:12.566 lat (msec) : 2=0.02%, 4=0.11%, 10=1.64%, 20=3.85%, 50=16.31% 00:25:12.566 lat (msec) : 100=27.29%, 250=33.26%, 500=15.79%, 750=1.73% 00:25:12.566 cpu : usr=1.56%, sys=1.65%, ctx=2562, majf=0, minf=1 00:25:12.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:12.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:12.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:12.567 issued rwts: total=0,4573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:12.567 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:12.567 00:25:12.567 Run status group 0 (all jobs): 00:25:12.567 WRITE: bw=1377MiB/s (1444MB/s), 65.9MiB/s-253MiB/s (69.1MB/s-265MB/s), io=13.7GiB (14.8GB), run=10040-10219msec 00:25:12.567 00:25:12.567 Disk stats (read/write): 00:25:12.567 nvme0n1: ios=49/20135, merge=0/0, ticks=118/1215312, in_queue=1215430, util=97.60% 00:25:12.567 nvme10n1: ios=47/5342, merge=0/0, ticks=1845/1230322, in_queue=1232167, util=99.23% 00:25:12.567 nvme1n1: ios=48/13985, merge=0/0, ticks=1098/1200408, in_queue=1201506, util=99.48% 00:25:12.567 nvme2n1: ios=49/5853, merge=0/0, ticks=47/1237315, in_queue=1237362, util=97.89% 00:25:12.567 nvme3n1: ios=54/8086, merge=0/0, ticks=2075/1232797, in_queue=1234872, util=99.71% 00:25:12.567 nvme4n1: ios=29/6235, merge=0/0, ticks=380/1216377, in_queue=1216757, util=100.00% 00:25:12.567 nvme5n1: ios=0/6494, merge=0/0, ticks=0/1237184, in_queue=1237184, util=98.31% 00:25:12.567 nvme6n1: ios=48/13797, merge=0/0, ticks=807/1211922, in_queue=1212729, util=100.00% 00:25:12.567 nvme7n1: ios=44/10913, merge=0/0, ticks=2149/1213513, in_queue=1215662, util=100.00% 00:25:12.567 nvme8n1: ios=0/11021, merge=0/0, ticks=0/1244929, in_queue=1244929, util=99.03% 00:25:12.567 nvme9n1: ios=0/9101, merge=0/0, ticks=0/1241359, in_queue=1241359, util=99.10% 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:12.567 02:03:26 
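Both fio passes above were launched through the repository's scripts/fio-wrapper with the same parameters apart from -t read versus -t randwrite. A hand-written equivalent of what ends up running, using the [global] and [job*] values printed in the log, is sketched below; the wrapper's own option parsing is not reproduced, and the device list assumes the same eleven connected SPDK namespaces.

# Equivalent of: fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
# (values copied from the [global]/[job*] sections printed in the log)
cat > multiconn.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
EOF
# ...one [jobN] stanza per remaining namespace (/dev/nvme1n1 .. /dev/nvme9n1)...
fio multiconn.fio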
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:12.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK1 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK1 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.567 02:03:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:12.567 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK2 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK2 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.567 02:03:27 
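Each of the eleven disconnect blocks in this part of the log (the first is above, the rest follow) is one pass of the loop traced as multiconnection.sh@37-40. A stand-alone sketch of that loop is below; waitforserial_disconnect is the helper the trace calls from autotest_common.sh, and rpc.py is invoked directly here instead of through the rpc_cmd wrapper.

# One teardown pass per subsystem, as traced in the surrounding log entries.
NVMF_SUBSYS=11
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in $(seq 1 $NVMF_SUBSYS); do
    # Drop the initiator-side connection for this subsystem ...
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # ... wait until no block device reports serial SPDK$i any more ...
    waitforserial_disconnect "SPDK${i}"
    # ... then delete the subsystem on the SPDK target.
    $rpc_py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done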
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.567 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:12.850 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK3 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK3 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:12.850 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK4 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK4 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:12.850 02:03:27 
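The waitforserial_disconnect steps in these blocks (autotest_common.sh@1217-1229 in the trace) simply poll lsblk until the given serial number no longer appears. A simplified re-creation follows; the retry limit and 2-second sleep are borrowed from the connect-side helper seen earlier and are assumptions here, the real function may differ in detail.

waitforserial_disconnect() {
    local serial=$1 i=0
    # Keep polling while any block device still reports this serial number.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        ((i++ <= 15)) || return 1   # assumed cap of ~15 polls (about 30 seconds)
        sleep 2
    done
    return 0
}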
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.850 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:13.109 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK5 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK5 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.109 02:03:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:13.367 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK6 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK6 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.368 02:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:13.368 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK7 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK7 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.368 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:13.627 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK8 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK8 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.627 02:03:28 
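For completeness, the teardown traced here mirrors the connect phase at the top of this section, where each subsystem was attached with one nvme connect followed by a waitforserial poll. A reconstruction from the @28-30 and @1196-1206 lines of that phase is shown below, with the argument handling simplified; the host UUID, target address and port are the values visible in the log.

hostid=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid="$hostid" \
    -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420

waitforserial() {    # simplified from the autotest_common.sh@1196-1206 trace
    local serial=$1 expected=${2:-1} i=0 found=0
    while ((i++ <= 15)); do
        sleep 2                                   # the trace sleeps 2s before each poll
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((found == expected)) && return 0
    done
    return 1
}
waitforserial SPDK11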
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:13.627 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK9 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK9 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.627 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:13.886 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK10 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK10 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.886 
02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:13.886 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK11 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK11 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.886 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:13.887 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:13.887 rmmod nvme_tcp 00:25:14.148 rmmod nvme_fabrics 00:25:14.148 rmmod nvme_keyring 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1481930 ']' 00:25:14.148 02:03:28 
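With the last subsystem deleted, nvmftestfini unloads the initiator modules (the rmmod lines above) and then kills the nvmf_tgt process and flushes the test interface, as traced below. Condensed into plain commands; the PID and interface name are the ones from this particular run.

sync
modprobe -v -r nvme-tcp        # removes nvme_tcp, nvme_fabrics, nvme_keyring as logged above
modprobe -v -r nvme-fabrics
nvmfpid=1481930                # reactor_0, the nvmf_tgt PID reported in this run
kill "$nvmfpid" && wait "$nvmfpid"   # wait only works because the target was started by this shell
ip -4 addr flush cvl_0_1       # test-side interface created earlier by the setup scripts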
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1481930 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 1481930 ']' 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 1481930 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1481930 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1481930' 00:25:14.148 killing process with pid 1481930 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 1481930 00:25:14.148 02:03:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 1481930 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.720 02:03:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:16.630 00:25:16.630 real 1m0.253s 00:25:16.630 user 3m19.975s 00:25:16.630 sys 0m24.635s 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.630 ************************************ 00:25:16.630 END TEST nvmf_multiconnection 00:25:16.630 ************************************ 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:25:16.630 ************************************ 00:25:16.630 START TEST nvmf_initiator_timeout 00:25:16.630 ************************************ 00:25:16.630 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:16.889 * Looking for test storage... 00:25:16.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.889 02:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.889 02:03:31 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:18.795 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.795 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.795 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.795 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.795 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.796 02:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:18.796 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.796 02:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:18.796 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:18.796 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.796 02:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:18.796 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:25:18.796 00:25:18.796 --- 10.0.0.2 ping statistics --- 00:25:18.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.796 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:25:18.796 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:25:18.796 00:25:18.796 --- 10.0.0.1 ping statistics --- 00:25:18.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.796 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1491192 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1491192 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 1491192 ']' 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:18.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:18.797 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:18.797 [2024-07-24 02:03:33.619883] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:25:18.797 [2024-07-24 02:03:33.619959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.797 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.797 [2024-07-24 02:03:33.682726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:19.055 [2024-07-24 02:03:33.768231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.055 [2024-07-24 02:03:33.768285] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.055 [2024-07-24 02:03:33.768309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.055 [2024-07-24 02:03:33.768341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.055 [2024-07-24 02:03:33.768351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.055 [2024-07-24 02:03:33.768419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.055 [2024-07-24 02:03:33.768486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.055 [2024-07-24 02:03:33.768552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.055 [2024-07-24 02:03:33.768554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.055 Malloc0 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.055 Delay0 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.055 [2024-07-24 02:03:33.934408] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.055 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:19.346 [2024-07-24 02:03:33.962692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.346 02:03:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:19.913 02:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:19.913 02:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1196 
-- # local i=0 00:25:19.913 02:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:25:19.913 02:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:25:19.913 02:03:34 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # sleep 2 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # return 0 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1491495 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:21.816 02:03:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:21.816 [global] 00:25:21.816 thread=1 00:25:21.816 invalidate=1 00:25:21.816 rw=write 00:25:21.816 time_based=1 00:25:21.816 runtime=60 00:25:21.816 ioengine=libaio 00:25:21.816 direct=1 00:25:21.816 bs=4096 00:25:21.816 iodepth=1 00:25:21.816 norandommap=0 00:25:21.816 numjobs=1 00:25:21.816 00:25:21.816 verify_dump=1 00:25:21.816 verify_backlog=512 00:25:21.816 verify_state_save=0 00:25:21.816 do_verify=1 00:25:21.816 verify=crc32c-intel 00:25:21.816 [job0] 00:25:21.816 filename=/dev/nvme0n1 00:25:21.816 Could not set queue depth (nvme0n1) 00:25:22.075 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:22.075 fio-3.35 00:25:22.075 Starting 1 thread 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 true 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 true 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.362 02:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:25.362 true 00:25:25.362 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.363 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:25.363 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.363 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:25.363 true 00:25:25.363 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.363 02:03:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 true 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 true 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 true 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:27.899 true 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.899 02:03:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:27.899 02:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1491495 00:26:24.126 00:26:24.126 job0: (groupid=0, jobs=1): err= 0: pid=1491589: Wed Jul 24 02:04:36 2024 00:26:24.126 read: IOPS=37, BW=151KiB/s (154kB/s)(9036KiB/60036msec) 00:26:24.126 slat (usec): min=5, max=7672, avg=22.27, stdev=224.58 00:26:24.126 clat (usec): min=251, max=41010k, avg=26222.93, stdev=862822.13 00:26:24.126 lat (usec): min=257, max=41010k, avg=26245.20, stdev=862822.04 00:26:24.126 clat percentiles (usec): 00:26:24.126 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 00:26:24.126 | 20.00th=[ 277], 30.00th=[ 285], 40.00th=[ 297], 00:26:24.126 | 50.00th=[ 302], 60.00th=[ 310], 70.00th=[ 318], 00:26:24.126 | 80.00th=[ 570], 90.00th=[ 41157], 95.00th=[ 42206], 00:26:24.126 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:24.126 | 99.95th=[ 42730], 99.99th=[17112761] 00:26:24.126 write: IOPS=42, BW=171KiB/s (175kB/s)(10.0MiB/60036msec); 0 zone resets 00:26:24.126 slat (nsec): min=6327, max=70606, avg=21407.83, stdev=10268.68 00:26:24.126 clat (usec): min=182, max=465, avg=260.95, stdev=57.47 00:26:24.126 lat (usec): min=189, max=494, avg=282.36, stdev=63.61 00:26:24.126 clat percentiles (usec): 00:26:24.126 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 215], 00:26:24.126 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 245], 60.00th=[ 260], 00:26:24.126 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 375], 00:26:24.126 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 465], 99.95th=[ 465], 00:26:24.126 | 99.99th=[ 465] 00:26:24.126 bw ( KiB/s): min= 3160, max= 5032, per=100.00%, avg=4096.00, stdev=661.85, samples=5 00:26:24.126 iops : min= 790, max= 1258, avg=1024.00, stdev=165.46, samples=5 00:26:24.126 lat (usec) : 250=29.05%, 500=60.93%, 750=1.14% 00:26:24.126 lat (msec) : 2=0.02%, 50=8.84%, >=2000=0.02% 00:26:24.126 cpu : usr=0.11%, sys=0.19%, ctx=4822, majf=0, minf=2 00:26:24.126 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:24.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.126 issued rwts: total=2259,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.126 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:24.126 00:26:24.126 Run status group 0 (all jobs): 00:26:24.126 READ: bw=151KiB/s (154kB/s), 151KiB/s-151KiB/s (154kB/s-154kB/s), io=9036KiB (9253kB), run=60036-60036msec 00:26:24.126 WRITE: bw=171KiB/s (175kB/s), 171KiB/s-171KiB/s (175kB/s-175kB/s), io=10.0MiB (10.5MB), run=60036-60036msec 00:26:24.126 00:26:24.126 Disk stats (read/write): 00:26:24.126 nvme0n1: ios=2308/2560, merge=0/0, ticks=19310/630, in_queue=19940, util=99.88% 00:26:24.126 02:04:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:24.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # local i=0 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # return 0 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:24.126 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:24.127 nvmf hotplug test: fio successful as expected 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:24.127 rmmod nvme_tcp 00:26:24.127 rmmod nvme_fabrics 00:26:24.127 rmmod nvme_keyring 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1491192 ']' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1491192 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 1491192 ']' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 1491192 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1491192 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1491192' 00:26:24.127 killing process with pid 1491192 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 1491192 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 1491192 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.127 02:04:37 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:24.692 00:26:24.692 real 1m7.898s 00:26:24.692 user 4m10.659s 00:26:24.692 sys 0m6.129s 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.692 ************************************ 00:26:24.692 END TEST nvmf_initiator_timeout 00:26:24.692 ************************************ 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:26:24.692 02:04:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 
-- # pci_drivers=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:26.599 ************************************ 00:26:26.599 START TEST nvmf_perf_adq 00:26:26.599 ************************************ 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:26.599 * Looking for test storage... 00:26:26.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.599 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.600 
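[editor's note] The common.sh setup above derives NVME_HOSTNQN from nvme gen-hostnqn and reuses its uuid suffix as NVME_HOSTID; in this ADQ test the initiator is spdk_nvme_perf rather than the kernel host, but the same identity could be used from a kernel initiator. A hedged sketch, assuming nvme-cli is installed and the 10.0.0.2:4420 listener that perf_adq.sh creates further down is up:
  # Derive the host identity the same way common.sh does, then connect to the
  # subsystem created later in this log (nqn.2016-06.io.spdk:cnode1).
  HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  HOSTID=${HOSTNQN##*uuid:}       # bare uuid, same relationship as NVME_HOSTID above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$HOSTNQN" --hostid "$HOSTID"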
02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:26.600 02:04:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.503 02:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:28.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:28.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.503 02:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:28.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:28.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:28.503 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:29.442 02:04:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:31.353 02:04:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:36.633 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:36.633 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:36.633 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.633 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:36.633 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.634 02:04:50 
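[editor's note] Just before this re-scan, adq_reload_driver simply unloaded and reloaded the ice driver and waited for the ports to come back, so ADQ-related hardware state starts clean before nvmftestinit picks the interfaces up again. A minimal equivalent (the 5-second settle time is taken from perf_adq.sh; run as root):
  # Reload the E810 driver, then give the NICs time to re-register their
  # net devices before they are re-discovered.
  rmmod ice || true     # ignore failure if the module was not loaded
  modprobe ice
  sleep 5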
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:36.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:36.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.634 02:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:36.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:36.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.634 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:36.635 02:04:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:36.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:26:36.635 00:26:36.635 --- 10.0.0.2 ping statistics --- 00:26:36.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.635 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:36.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:26:36.635 00:26:36.635 --- 10.0.0.1 ping statistics --- 00:26:36.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.635 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1503158 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0xF --wait-for-rpc 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1503158 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1503158 ']' 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 [2024-07-24 02:04:51.116636] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:26:36.635 [2024-07-24 02:04:51.116706] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.635 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.635 [2024-07-24 02:04:51.185222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:36.635 [2024-07-24 02:04:51.276730] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.635 [2024-07-24 02:04:51.276790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:36.635 [2024-07-24 02:04:51.276817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:36.635 [2024-07-24 02:04:51.276830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:36.635 [2024-07-24 02:04:51.276842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
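[editor's note] nvmf_tcp_init above splits the two E810 ports into a target/initiator pair on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and nvmf_tgt is then started inside the namespace paused with --wait-for-rpc. Condensed from the commands in the log (paths relative to an SPDK checkout; interface names will differ on other machines):
  # Target side lives in its own namespace so the NVMe/TCP traffic really
  # crosses the two physical ports instead of looping back in software.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Start the target paused (--wait-for-rpc) so sock options can be set first.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &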
00:26:36.635 [2024-07-24 02:04:51.276920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.635 [2024-07-24 02:04:51.276989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:36.635 [2024-07-24 02:04:51.277081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:36.635 [2024-07-24 02:04:51.277084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.635 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.635 [2024-07-24 02:04:51.522781] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.893 Malloc1 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:36.893 [2024-07-24 02:04:51.576363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1503226 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:36.893 02:04:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:36.893 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:38.790 "tick_rate": 2700000000, 00:26:38.790 "poll_groups": [ 00:26:38.790 { 00:26:38.790 "name": "nvmf_tgt_poll_group_000", 00:26:38.790 "admin_qpairs": 1, 00:26:38.790 "io_qpairs": 1, 00:26:38.790 "current_admin_qpairs": 1, 00:26:38.790 
"current_io_qpairs": 1, 00:26:38.790 "pending_bdev_io": 0, 00:26:38.790 "completed_nvme_io": 19858, 00:26:38.790 "transports": [ 00:26:38.790 { 00:26:38.790 "trtype": "TCP" 00:26:38.790 } 00:26:38.790 ] 00:26:38.790 }, 00:26:38.790 { 00:26:38.790 "name": "nvmf_tgt_poll_group_001", 00:26:38.790 "admin_qpairs": 0, 00:26:38.790 "io_qpairs": 1, 00:26:38.790 "current_admin_qpairs": 0, 00:26:38.790 "current_io_qpairs": 1, 00:26:38.790 "pending_bdev_io": 0, 00:26:38.790 "completed_nvme_io": 20293, 00:26:38.790 "transports": [ 00:26:38.790 { 00:26:38.790 "trtype": "TCP" 00:26:38.790 } 00:26:38.790 ] 00:26:38.790 }, 00:26:38.790 { 00:26:38.790 "name": "nvmf_tgt_poll_group_002", 00:26:38.790 "admin_qpairs": 0, 00:26:38.790 "io_qpairs": 1, 00:26:38.790 "current_admin_qpairs": 0, 00:26:38.790 "current_io_qpairs": 1, 00:26:38.790 "pending_bdev_io": 0, 00:26:38.790 "completed_nvme_io": 20353, 00:26:38.790 "transports": [ 00:26:38.790 { 00:26:38.790 "trtype": "TCP" 00:26:38.790 } 00:26:38.790 ] 00:26:38.790 }, 00:26:38.790 { 00:26:38.790 "name": "nvmf_tgt_poll_group_003", 00:26:38.790 "admin_qpairs": 0, 00:26:38.790 "io_qpairs": 1, 00:26:38.790 "current_admin_qpairs": 0, 00:26:38.790 "current_io_qpairs": 1, 00:26:38.790 "pending_bdev_io": 0, 00:26:38.790 "completed_nvme_io": 20294, 00:26:38.790 "transports": [ 00:26:38.790 { 00:26:38.790 "trtype": "TCP" 00:26:38.790 } 00:26:38.790 ] 00:26:38.790 } 00:26:38.790 ] 00:26:38.790 }' 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:38.790 02:04:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1503226 00:26:46.894 Initializing NVMe Controllers 00:26:46.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:46.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:46.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:46.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:46.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:46.894 Initialization complete. Launching workers. 
00:26:46.894 ======================================================== 00:26:46.894 Latency(us) 00:26:46.894 Device Information : IOPS MiB/s Average min max 00:26:46.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10675.80 41.70 5995.59 2450.75 9087.15 00:26:46.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10643.30 41.58 6014.94 2886.61 8985.58 00:26:46.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10687.90 41.75 5990.35 4791.04 7625.15 00:26:46.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10363.60 40.48 6177.38 2634.79 9846.38 00:26:46.894 ======================================================== 00:26:46.894 Total : 42370.61 165.51 6043.59 2450.75 9846.38 00:26:46.894 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.894 rmmod nvme_tcp 00:26:46.894 rmmod nvme_fabrics 00:26:46.894 rmmod nvme_keyring 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1503158 ']' 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1503158 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1503158 ']' 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1503158 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.894 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1503158 00:26:47.153 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.153 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.153 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1503158' 00:26:47.153 killing process with pid 1503158 00:26:47.153 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1503158 00:26:47.153 02:05:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1503158 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.153 02:05:02 
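[editor's note] The per-core latency table above comes from a single spdk_nvme_perf run against that listener, and the nvmf_get_stats check just before it asserts that all four poll groups ended up owning exactly one I/O qpair each (count=4 in the log), i.e. the connections were spread evenly across the target's cores. Roughly what those two steps look like when run by hand:
  # 4 initiator cores (-c 0xF0, cores 4-7, matching the "from core 4..7" lines
  # above), 64 QD, 4 KiB random reads for 10 s, kept off the target's cores 0-3.
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
  perfpid=$!
  sleep 2   # let the qpairs settle, as perf_adq.sh does before sampling stats
  # Count poll groups that currently own exactly one I/O qpair; expect 4.
  ./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l
  wait "$perfpid"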
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.153 02:05:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.686 02:05:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.686 02:05:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:49.686 02:05:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:49.944 02:05:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:51.846 02:05:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.115 02:05:11 
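[editor's note] nvmftestfini above tears the first pass down in roughly the reverse order it was built, unloading the kernel NVMe/TCP modules, killing the target by pid, removing the test namespace and flushing the test addresses, before the ice driver is reloaded once more for the busy-poll pass that follows. Approximately, with the namespace removal being an assumption about what _remove_spdk_ns does here:
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1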
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.115 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:57.116 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:57.116 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:57.116 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.116 02:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:57.116 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:57.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:57.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:26:57.116 00:26:57.116 --- 10.0.0.2 ping statistics --- 00:26:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.116 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:57.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:26:57.116 00:26:57.116 --- 10.0.0.1 ping statistics --- 00:26:57.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.116 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:57.116 net.core.busy_poll = 1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:57.116 net.core.busy_read = 1 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:57.116 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:57.117 02:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1505837 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1505837 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1505837 ']' 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.117 02:05:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.375 [2024-07-24 02:05:12.043009] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:26:57.375 [2024-07-24 02:05:12.043081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.375 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.375 [2024-07-24 02:05:12.108870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.375 [2024-07-24 02:05:12.200058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.375 [2024-07-24 02:05:12.200122] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.375 [2024-07-24 02:05:12.200139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.375 [2024-07-24 02:05:12.200152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.375 [2024-07-24 02:05:12.200164] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
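(The adq_configure_driver steps traced above reduce to the short sequence below; this is a minimal sketch assembled from the commands visible in this trace, so the interface cvl_0_0, namespace cvl_0_0_ns_spdk, address 10.0.0.2 and port 4420 are values from this particular run, not fixed names.)

  # enable hardware TC offload and busy polling for the target-side port
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # split the port into two traffic classes (2 queues each) and steer NVMe/TCP
  # traffic for 10.0.0.2:4420 into the second class in hardware (skip_sw hw_tc 1)
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # the trace additionally runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align
  # transmit queue selection (XPS) with the receive queues of the device
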
00:26:57.375 [2024-07-24 02:05:12.200246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.375 [2024-07-24 02:05:12.200298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.375 [2024-07-24 02:05:12.200415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.375 [2024-07-24 02:05:12.200418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:57.633 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.634 [2024-07-24 02:05:12.489494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.634 Malloc1 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.634 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.892 [2024-07-24 02:05:12.542705] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1505868 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:57.892 02:05:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:57.892 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:59.825 "tick_rate": 2700000000, 00:26:59.825 "poll_groups": [ 00:26:59.825 { 00:26:59.825 "name": "nvmf_tgt_poll_group_000", 00:26:59.825 "admin_qpairs": 1, 00:26:59.825 "io_qpairs": 2, 00:26:59.825 "current_admin_qpairs": 1, 00:26:59.825 
"current_io_qpairs": 2, 00:26:59.825 "pending_bdev_io": 0, 00:26:59.825 "completed_nvme_io": 26295, 00:26:59.825 "transports": [ 00:26:59.825 { 00:26:59.825 "trtype": "TCP" 00:26:59.825 } 00:26:59.825 ] 00:26:59.825 }, 00:26:59.825 { 00:26:59.825 "name": "nvmf_tgt_poll_group_001", 00:26:59.825 "admin_qpairs": 0, 00:26:59.825 "io_qpairs": 2, 00:26:59.825 "current_admin_qpairs": 0, 00:26:59.825 "current_io_qpairs": 2, 00:26:59.825 "pending_bdev_io": 0, 00:26:59.825 "completed_nvme_io": 25723, 00:26:59.825 "transports": [ 00:26:59.825 { 00:26:59.825 "trtype": "TCP" 00:26:59.825 } 00:26:59.825 ] 00:26:59.825 }, 00:26:59.825 { 00:26:59.825 "name": "nvmf_tgt_poll_group_002", 00:26:59.825 "admin_qpairs": 0, 00:26:59.825 "io_qpairs": 0, 00:26:59.825 "current_admin_qpairs": 0, 00:26:59.825 "current_io_qpairs": 0, 00:26:59.825 "pending_bdev_io": 0, 00:26:59.825 "completed_nvme_io": 0, 00:26:59.825 "transports": [ 00:26:59.825 { 00:26:59.825 "trtype": "TCP" 00:26:59.825 } 00:26:59.825 ] 00:26:59.825 }, 00:26:59.825 { 00:26:59.825 "name": "nvmf_tgt_poll_group_003", 00:26:59.825 "admin_qpairs": 0, 00:26:59.825 "io_qpairs": 0, 00:26:59.825 "current_admin_qpairs": 0, 00:26:59.825 "current_io_qpairs": 0, 00:26:59.825 "pending_bdev_io": 0, 00:26:59.825 "completed_nvme_io": 0, 00:26:59.825 "transports": [ 00:26:59.825 { 00:26:59.825 "trtype": "TCP" 00:26:59.825 } 00:26:59.825 ] 00:26:59.825 } 00:26:59.825 ] 00:26:59.825 }' 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:59.825 02:05:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1505868 00:27:07.930 Initializing NVMe Controllers 00:27:07.930 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:07.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:07.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:07.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:07.930 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:07.930 Initialization complete. Launching workers. 
00:27:07.930 ======================================================== 00:27:07.930 Latency(us) 00:27:07.930 Device Information : IOPS MiB/s Average min max 00:27:07.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5969.50 23.32 10756.42 1992.97 55161.11 00:27:07.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6403.90 25.02 9995.70 1680.07 54667.22 00:27:07.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7811.80 30.51 8192.29 1457.85 53409.66 00:27:07.930 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7307.90 28.55 8760.21 1446.50 54780.10 00:27:07.930 ======================================================== 00:27:07.930 Total : 27493.10 107.39 9320.05 1446.50 55161.11 00:27:07.930 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:07.930 rmmod nvme_tcp 00:27:07.930 rmmod nvme_fabrics 00:27:07.930 rmmod nvme_keyring 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1505837 ']' 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1505837 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1505837 ']' 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1505837 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505837 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505837' 00:27:07.930 killing process with pid 1505837 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1505837 00:27:07.930 02:05:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1505837 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.189 
02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.189 02:05:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:10.716 00:27:10.716 real 0m43.679s 00:27:10.716 user 2m38.106s 00:27:10.716 sys 0m10.046s 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.716 ************************************ 00:27:10.716 END TEST nvmf_perf_adq 00:27:10.716 ************************************ 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:10.716 ************************************ 00:27:10.716 START TEST nvmf_shutdown 00:27:10.716 ************************************ 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:10.716 * Looking for test storage... 
00:27:10.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.716 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.717 02:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:10.717 02:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.717 ************************************ 00:27:10.717 START TEST nvmf_shutdown_tc1 00:27:10.717 ************************************ 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:10.717 02:05:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.618 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:12.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:12.619 02:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:12.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:12.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:12.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.619 02:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.619 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:27:12.620 00:27:12.620 --- 10.0.0.2 ping statistics --- 00:27:12.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.620 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:12.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:27:12.620 00:27:12.620 --- 10.0.0.1 ping statistics --- 00:27:12.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.620 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1509019 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1509019 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1509019 ']' 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.620 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.620 [2024-07-24 02:05:27.312137] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:12.620 [2024-07-24 02:05:27.312233] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.620 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.620 [2024-07-24 02:05:27.382115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.620 [2024-07-24 02:05:27.472931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.620 [2024-07-24 02:05:27.472994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.620 [2024-07-24 02:05:27.473022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.620 [2024-07-24 02:05:27.473036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.620 [2024-07-24 02:05:27.473048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
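(The nvmf_tcp_init sequence traced here, as in the earlier perf_adq run, amounts to moving one port of the NIC into a private network namespace and addressing both ends; a minimal sketch using the names and addresses from this run, cvl_0_0/cvl_0_1 and 10.0.0.2/10.0.0.1 on port 4420.)

  # target port lives in its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in from the initiator side, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
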
00:27:12.620 [2024-07-24 02:05:27.473159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.620 [2024-07-24 02:05:27.473257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.620 [2024-07-24 02:05:27.473334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:12.620 [2024-07-24 02:05:27.473354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.879 [2024-07-24 02:05:27.622628] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.879 02:05:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:12.879 Malloc1 00:27:12.879 [2024-07-24 02:05:27.706720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.879 Malloc2 00:27:13.137 Malloc3 00:27:13.137 Malloc4 00:27:13.137 Malloc5 00:27:13.137 Malloc6 00:27:13.137 Malloc7 00:27:13.396 Malloc8 00:27:13.396 Malloc9 00:27:13.396 Malloc10 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1509200 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1509200 /var/tmp/bdevperf.sock 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1509200 ']' 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:13.396 02:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:13.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": "Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.396 "hdgst": ${hdgst:-false}, 00:27:13.396 "ddgst": ${ddgst:-false} 00:27:13.396 }, 00:27:13.396 "method": "bdev_nvme_attach_controller" 00:27:13.396 } 00:27:13.396 EOF 00:27:13.396 )") 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": "Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.396 "hdgst": ${hdgst:-false}, 00:27:13.396 "ddgst": ${ddgst:-false} 00:27:13.396 }, 00:27:13.396 "method": "bdev_nvme_attach_controller" 00:27:13.396 } 00:27:13.396 EOF 00:27:13.396 )") 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": 
"Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.396 "hdgst": ${hdgst:-false}, 00:27:13.396 "ddgst": ${ddgst:-false} 00:27:13.396 }, 00:27:13.396 "method": "bdev_nvme_attach_controller" 00:27:13.396 } 00:27:13.396 EOF 00:27:13.396 )") 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": "Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.396 "hdgst": ${hdgst:-false}, 00:27:13.396 "ddgst": ${ddgst:-false} 00:27:13.396 }, 00:27:13.396 "method": "bdev_nvme_attach_controller" 00:27:13.396 } 00:27:13.396 EOF 00:27:13.396 )") 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": "Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.396 "hdgst": ${hdgst:-false}, 00:27:13.396 "ddgst": ${ddgst:-false} 00:27:13.396 }, 00:27:13.396 "method": "bdev_nvme_attach_controller" 00:27:13.396 } 00:27:13.396 EOF 00:27:13.396 )") 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": "Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.396 "hdgst": ${hdgst:-false}, 00:27:13.396 "ddgst": ${ddgst:-false} 00:27:13.396 }, 00:27:13.396 "method": "bdev_nvme_attach_controller" 00:27:13.396 } 00:27:13.396 EOF 00:27:13.396 )") 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:27:13.396 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.396 { 00:27:13.396 "params": { 00:27:13.396 "name": "Nvme$subsystem", 00:27:13.396 "trtype": "$TEST_TRANSPORT", 00:27:13.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.396 "adrfam": "ipv4", 00:27:13.396 "trsvcid": "$NVMF_PORT", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.397 "hdgst": ${hdgst:-false}, 00:27:13.397 "ddgst": ${ddgst:-false} 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 } 00:27:13.397 EOF 00:27:13.397 )") 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.397 { 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme$subsystem", 00:27:13.397 "trtype": "$TEST_TRANSPORT", 00:27:13.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "$NVMF_PORT", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.397 "hdgst": ${hdgst:-false}, 00:27:13.397 "ddgst": ${ddgst:-false} 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 } 00:27:13.397 EOF 00:27:13.397 )") 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.397 { 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme$subsystem", 00:27:13.397 "trtype": "$TEST_TRANSPORT", 00:27:13.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "$NVMF_PORT", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.397 "hdgst": ${hdgst:-false}, 00:27:13.397 "ddgst": ${ddgst:-false} 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 } 00:27:13.397 EOF 00:27:13.397 )") 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.397 { 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme$subsystem", 00:27:13.397 "trtype": "$TEST_TRANSPORT", 00:27:13.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "$NVMF_PORT", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.397 "hdgst": ${hdgst:-false}, 00:27:13.397 "ddgst": ${ddgst:-false} 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 } 00:27:13.397 EOF 00:27:13.397 )") 00:27:13.397 02:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:13.397 02:05:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme1", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme2", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme3", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme4", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme5", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme6", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme7", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme8", 00:27:13.397 "trtype": "tcp", 
00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme9", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 },{ 00:27:13.397 "params": { 00:27:13.397 "name": "Nvme10", 00:27:13.397 "trtype": "tcp", 00:27:13.397 "traddr": "10.0.0.2", 00:27:13.397 "adrfam": "ipv4", 00:27:13.397 "trsvcid": "4420", 00:27:13.397 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:13.397 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:13.397 "hdgst": false, 00:27:13.397 "ddgst": false 00:27:13.397 }, 00:27:13.397 "method": "bdev_nvme_attach_controller" 00:27:13.397 }' 00:27:13.397 [2024-07-24 02:05:28.219834] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:13.397 [2024-07-24 02:05:28.219922] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:13.397 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.397 [2024-07-24 02:05:28.283978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.656 [2024-07-24 02:05:28.371150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1509200 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:15.552 02:05:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:16.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1509200 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1509019 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.485 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.485 { 00:27:16.485 "params": { 00:27:16.485 "name": "Nvme$subsystem", 00:27:16.485 "trtype": "$TEST_TRANSPORT", 00:27:16.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.485 "adrfam": "ipv4", 00:27:16.485 "trsvcid": "$NVMF_PORT", 00:27:16.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.485 "hdgst": ${hdgst:-false}, 00:27:16.485 "ddgst": ${ddgst:-false} 00:27:16.485 }, 00:27:16.485 "method": "bdev_nvme_attach_controller" 00:27:16.485 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.486 { 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme$subsystem", 00:27:16.486 "trtype": "$TEST_TRANSPORT", 00:27:16.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "$NVMF_PORT", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.486 "hdgst": ${hdgst:-false}, 00:27:16.486 "ddgst": ${ddgst:-false} 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 } 00:27:16.486 EOF 00:27:16.486 )") 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:16.486 02:05:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme1", 00:27:16.486 "trtype": "tcp", 00:27:16.486 "traddr": "10.0.0.2", 00:27:16.486 "adrfam": "ipv4", 00:27:16.486 "trsvcid": "4420", 00:27:16.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:16.486 "hdgst": false, 00:27:16.486 "ddgst": false 00:27:16.486 }, 00:27:16.486 "method": "bdev_nvme_attach_controller" 00:27:16.486 },{ 00:27:16.486 "params": { 00:27:16.486 "name": "Nvme2", 00:27:16.486 "trtype": "tcp", 00:27:16.486 "traddr": "10.0.0.2", 00:27:16.486 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme3", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme4", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme5", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme6", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme7", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme8", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme9", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 },{ 00:27:16.487 "params": { 00:27:16.487 "name": "Nvme10", 00:27:16.487 "trtype": "tcp", 00:27:16.487 "traddr": "10.0.0.2", 00:27:16.487 "adrfam": "ipv4", 00:27:16.487 "trsvcid": "4420", 00:27:16.487 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:16.487 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:16.487 "hdgst": false, 00:27:16.487 "ddgst": false 00:27:16.487 }, 00:27:16.487 "method": "bdev_nvme_attach_controller" 00:27:16.487 }' 00:27:16.487 [2024-07-24 02:05:31.255452] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:16.487 [2024-07-24 02:05:31.255541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509613 ] 00:27:16.487 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.487 [2024-07-24 02:05:31.320047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.745 [2024-07-24 02:05:31.409687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.118 Running I/O for 1 seconds... 00:27:19.490 00:27:19.490 Latency(us) 00:27:19.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme1n1 : 1.14 224.36 14.02 0.00 0.00 282256.12 20583.16 267192.70 00:27:19.490 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme2n1 : 1.14 227.47 14.22 0.00 0.00 270858.61 13592.65 262532.36 00:27:19.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme3n1 : 1.12 229.43 14.34 0.00 0.00 266308.27 21262.79 237677.23 00:27:19.490 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme4n1 : 1.12 228.79 14.30 0.00 0.00 263288.60 20583.16 264085.81 00:27:19.490 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme5n1 : 1.13 226.34 14.15 0.00 0.00 261708.80 21942.42 268746.15 00:27:19.490 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme6n1 : 1.15 223.06 13.94 0.00 0.00 261235.86 19709.35 248551.35 00:27:19.490 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme7n1 : 1.16 221.28 13.83 0.00 0.00 259189.76 18350.08 274959.93 00:27:19.490 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 
Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme8n1 : 1.14 225.01 14.06 0.00 0.00 249808.97 35923.44 250104.79 00:27:19.490 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme9n1 : 1.15 222.12 13.88 0.00 0.00 248988.07 21165.70 270299.59 00:27:19.490 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:19.490 Verification LBA range: start 0x0 length 0x400 00:27:19.490 Nvme10n1 : 1.23 259.17 16.20 0.00 0.00 204116.73 7330.32 293601.28 00:27:19.490 =================================================================================================================== 00:27:19.490 Total : 2287.03 142.94 0.00 0.00 255515.00 7330.32 293601.28 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.490 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.490 rmmod nvme_tcp 00:27:19.490 rmmod nvme_fabrics 00:27:19.748 rmmod nvme_keyring 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1509019 ']' 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1509019 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1509019 ']' 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1509019 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1509019 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1509019' 00:27:19.748 killing process with pid 1509019 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1509019 00:27:19.748 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1509019 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.315 02:05:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:22.216 00:27:22.216 real 0m11.733s 00:27:22.216 user 0m34.341s 00:27:22.216 sys 0m3.137s 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:22.216 ************************************ 00:27:22.216 END TEST nvmf_shutdown_tc1 00:27:22.216 ************************************ 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:22.216 ************************************ 00:27:22.216 START TEST nvmf_shutdown_tc2 00:27:22.216 ************************************ 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:22.216 02:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:22.216 02:05:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.216 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.217 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.217 02:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.217 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.217 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.476 02:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:27:22.476 00:27:22.476 --- 10.0.0.2 ping statistics --- 00:27:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.476 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:27:22.476 00:27:22.476 --- 10.0.0.1 ping statistics --- 00:27:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.476 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1510384 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1510384 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1510384 ']' 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.476 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.476 [2024-07-24 02:05:37.234906] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:22.476 [2024-07-24 02:05:37.234993] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.476 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.477 [2024-07-24 02:05:37.304949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.735 [2024-07-24 02:05:37.402855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.735 [2024-07-24 02:05:37.402917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.735 [2024-07-24 02:05:37.402934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.735 [2024-07-24 02:05:37.402946] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.735 [2024-07-24 02:05:37.402958] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
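Note: the nvmf_tcp_init sequence traced above (namespace creation, address assignment, firewall rule, and the two ping checks) can be reproduced by hand. The sketch below condenses it using only the interface names, addresses, and port that appear in this log; anything beyond those details is an assumption rather than the exact nvmf/common.sh code.

    # Condensed re-creation of the TCP loopback topology built by nvmf_tcp_init
    # (names and addresses taken from the trace above; illustrative sketch only).
    NS=cvl_0_0_ns_spdk
    TGT_IF=cvl_0_0          # target-side port, moved into the namespace
    INI_IF=cvl_0_1          # initiator-side port, stays in the default namespace

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator

With both pings answering, nvmfappstart launches nvmf_tgt inside the namespace, which is what the reactor start-up notices that follow correspond to.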
00:27:22.735 [2024-07-24 02:05:37.403022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.735 [2024-07-24 02:05:37.403142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.735 [2024-07-24 02:05:37.403209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:22.735 [2024-07-24 02:05:37.403212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.735 [2024-07-24 02:05:37.563878] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.735 02:05:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.993 Malloc1 00:27:22.993 [2024-07-24 02:05:37.652980] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.993 Malloc2 00:27:22.993 Malloc3 00:27:22.993 Malloc4 00:27:22.993 Malloc5 00:27:22.993 Malloc6 00:27:23.251 Malloc7 00:27:23.251 Malloc8 00:27:23.251 Malloc9 00:27:23.251 Malloc10 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1510563 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1510563 /var/tmp/bdevperf.sock 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1510563 ']' 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:23.251 02:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.251 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:23.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 
"name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.252 EOF 00:27:23.252 )") 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:23.252 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:23.252 { 00:27:23.252 "params": { 00:27:23.252 "name": "Nvme$subsystem", 00:27:23.252 "trtype": "$TEST_TRANSPORT", 00:27:23.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:23.252 "adrfam": "ipv4", 00:27:23.252 "trsvcid": "$NVMF_PORT", 00:27:23.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:23.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:23.252 "hdgst": ${hdgst:-false}, 00:27:23.252 "ddgst": ${ddgst:-false} 00:27:23.252 }, 00:27:23.252 "method": "bdev_nvme_attach_controller" 00:27:23.252 } 00:27:23.253 EOF 00:27:23.253 )") 00:27:23.538 02:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:23.538 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:27:23.538 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:23.538 02:05:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:23.538 "params": { 00:27:23.538 "name": "Nvme1", 00:27:23.538 "trtype": "tcp", 00:27:23.538 "traddr": "10.0.0.2", 00:27:23.538 "adrfam": "ipv4", 00:27:23.538 "trsvcid": "4420", 00:27:23.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:23.538 "hdgst": false, 00:27:23.538 "ddgst": false 00:27:23.538 }, 00:27:23.538 "method": "bdev_nvme_attach_controller" 00:27:23.538 },{ 00:27:23.538 "params": { 00:27:23.538 "name": "Nvme2", 00:27:23.538 "trtype": "tcp", 00:27:23.538 "traddr": "10.0.0.2", 00:27:23.538 "adrfam": "ipv4", 00:27:23.538 "trsvcid": "4420", 00:27:23.538 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:23.538 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:23.538 "hdgst": false, 00:27:23.538 "ddgst": false 00:27:23.538 }, 00:27:23.538 "method": "bdev_nvme_attach_controller" 00:27:23.538 },{ 00:27:23.538 "params": { 00:27:23.538 "name": "Nvme3", 00:27:23.538 "trtype": "tcp", 00:27:23.538 "traddr": "10.0.0.2", 00:27:23.538 "adrfam": "ipv4", 00:27:23.538 "trsvcid": "4420", 00:27:23.538 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:23.538 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:23.538 "hdgst": false, 00:27:23.538 "ddgst": false 00:27:23.538 }, 00:27:23.538 "method": "bdev_nvme_attach_controller" 00:27:23.538 },{ 00:27:23.538 "params": { 00:27:23.538 "name": "Nvme4", 00:27:23.538 "trtype": "tcp", 00:27:23.538 "traddr": "10.0.0.2", 00:27:23.538 "adrfam": "ipv4", 00:27:23.538 "trsvcid": "4420", 00:27:23.538 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:23.538 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:23.538 "hdgst": false, 00:27:23.538 "ddgst": false 00:27:23.538 }, 00:27:23.538 "method": "bdev_nvme_attach_controller" 00:27:23.538 },{ 00:27:23.538 "params": { 00:27:23.538 "name": "Nvme5", 00:27:23.538 "trtype": "tcp", 00:27:23.538 "traddr": "10.0.0.2", 00:27:23.538 "adrfam": "ipv4", 00:27:23.538 "trsvcid": "4420", 00:27:23.538 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:23.538 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:23.538 "hdgst": false, 00:27:23.538 "ddgst": false 00:27:23.538 }, 00:27:23.538 "method": "bdev_nvme_attach_controller" 00:27:23.538 },{ 00:27:23.538 "params": { 00:27:23.538 "name": "Nvme6", 00:27:23.538 "trtype": "tcp", 00:27:23.538 "traddr": "10.0.0.2", 00:27:23.539 "adrfam": "ipv4", 00:27:23.539 "trsvcid": "4420", 00:27:23.539 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:23.539 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:23.539 "hdgst": false, 00:27:23.539 "ddgst": false 00:27:23.539 }, 00:27:23.539 "method": "bdev_nvme_attach_controller" 00:27:23.539 },{ 00:27:23.539 "params": { 00:27:23.539 "name": "Nvme7", 00:27:23.539 "trtype": "tcp", 00:27:23.539 "traddr": "10.0.0.2", 00:27:23.539 "adrfam": "ipv4", 00:27:23.539 "trsvcid": "4420", 00:27:23.539 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:23.539 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:23.539 "hdgst": false, 00:27:23.539 "ddgst": false 00:27:23.539 }, 00:27:23.539 "method": "bdev_nvme_attach_controller" 00:27:23.539 },{ 00:27:23.539 "params": { 00:27:23.539 "name": "Nvme8", 00:27:23.539 "trtype": "tcp", 
00:27:23.539 "traddr": "10.0.0.2", 00:27:23.539 "adrfam": "ipv4", 00:27:23.539 "trsvcid": "4420", 00:27:23.539 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:23.539 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:23.539 "hdgst": false, 00:27:23.539 "ddgst": false 00:27:23.539 }, 00:27:23.539 "method": "bdev_nvme_attach_controller" 00:27:23.539 },{ 00:27:23.539 "params": { 00:27:23.539 "name": "Nvme9", 00:27:23.539 "trtype": "tcp", 00:27:23.539 "traddr": "10.0.0.2", 00:27:23.539 "adrfam": "ipv4", 00:27:23.539 "trsvcid": "4420", 00:27:23.539 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:23.539 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:23.539 "hdgst": false, 00:27:23.539 "ddgst": false 00:27:23.539 }, 00:27:23.539 "method": "bdev_nvme_attach_controller" 00:27:23.539 },{ 00:27:23.539 "params": { 00:27:23.539 "name": "Nvme10", 00:27:23.539 "trtype": "tcp", 00:27:23.539 "traddr": "10.0.0.2", 00:27:23.539 "adrfam": "ipv4", 00:27:23.539 "trsvcid": "4420", 00:27:23.539 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:23.539 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:23.539 "hdgst": false, 00:27:23.539 "ddgst": false 00:27:23.539 }, 00:27:23.539 "method": "bdev_nvme_attach_controller" 00:27:23.539 }' 00:27:23.539 [2024-07-24 02:05:38.160040] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:23.539 [2024-07-24 02:05:38.160123] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510563 ] 00:27:23.539 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.539 [2024-07-24 02:05:38.225020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.539 [2024-07-24 02:05:38.312739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.436 Running I/O for 10 seconds... 
00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:25.436 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.693 02:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.693 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:25.694 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:25.694 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1510563 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1510563 ']' 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1510563 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510563 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510563' 00:27:25.952 killing process with pid 1510563 00:27:25.952 02:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1510563 00:27:25.952 02:05:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1510563 00:27:26.210 Received shutdown signal, test time was about 0.987477 seconds 00:27:26.210 00:27:26.210 Latency(us) 00:27:26.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme1n1 : 0.98 260.18 16.26 0.00 0.00 243204.17 21748.24 240784.12 00:27:26.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme2n1 : 0.97 267.39 16.71 0.00 0.00 231044.27 4490.43 236123.78 00:27:26.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme3n1 : 0.97 263.63 16.48 0.00 0.00 230699.24 19029.71 250104.79 00:27:26.210 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme4n1 : 0.99 259.46 16.22 0.00 0.00 229993.62 19029.71 260978.92 00:27:26.210 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme5n1 : 0.96 200.01 12.50 0.00 0.00 291947.96 22622.06 287387.50 00:27:26.210 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme6n1 : 0.95 201.20 12.57 0.00 0.00 283967.08 22622.06 254765.13 00:27:26.210 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme7n1 : 0.95 202.65 12.67 0.00 0.00 275392.28 20097.71 256318.58 00:27:26.210 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme8n1 : 0.98 269.80 16.86 0.00 0.00 203026.81 3070.48 251658.24 00:27:26.210 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme9n1 : 0.97 198.90 12.43 0.00 0.00 269327.17 22524.97 273406.48 00:27:26.210 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:26.210 Verification LBA range: start 0x0 length 0x400 00:27:26.210 Nvme10n1 : 0.94 204.78 12.80 0.00 0.00 254096.88 20291.89 256318.58 00:27:26.210 =================================================================================================================== 00:27:26.210 Total : 2328.00 145.50 0.00 0.00 247698.34 3070.48 287387.50 00:27:26.467 02:05:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1510384 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:27.398 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:27.399 rmmod nvme_tcp 00:27:27.399 rmmod nvme_fabrics 00:27:27.399 rmmod nvme_keyring 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1510384 ']' 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1510384 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1510384 ']' 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1510384 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1510384 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1510384' 00:27:27.399 killing process with pid 1510384 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1510384 00:27:27.399 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1510384 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
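Note: the lines above are the tail of stoptarget/nvmftestfini for tc2: bdevperf's scratch files are removed, the nvme-tcp and nvme-fabrics modules are unloaded, and killprocess tears down the nvmf_tgt left running by the test. A simplified sketch of the killprocess pattern visible in the trace follows; the real helper in autotest_common.sh also does a uname check and a few other corner cases, so treat this as illustrative.

    # Simplified sketch of the killprocess helper pattern seen in the trace.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0               # already gone
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            return 1                                          # refuse to kill a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }

After the process exits, remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace and the leftover address on cvl_0_1 is flushed, which is the clean state the tc3 run below starts from.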
00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.963 02:05:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:29.863 00:27:29.863 real 0m7.718s 00:27:29.863 user 0m23.362s 00:27:29.863 sys 0m1.554s 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.863 ************************************ 00:27:29.863 END TEST nvmf_shutdown_tc2 00:27:29.863 ************************************ 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.863 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:30.122 ************************************ 00:27:30.122 START TEST nvmf_shutdown_tc3 00:27:30.122 ************************************ 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
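Note: nvmf_shutdown_tc3 begins by running nvmftestinit again, so the physical-NIC discovery repeats in the trace below: the two e810 ports at 0000:0a:00.0 and 0000:0a:00.1 are matched by PCI ID and their kernel net devices (cvl_0_0, cvl_0_1) are collected. A condensed sketch of that discovery loop, paraphrasing nvmf/common.sh with the operstate check and non-e810 branches left out:

    # Condensed sketch of the PCI -> netdev discovery repeated in the trace below.
    net_devs=()
    for pci in "${pci_devs[@]}"; do                          # e.g. 0000:0a:00.0 0000:0a:00.1
        pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)
        [ -e "${pci_net_devs[0]}" ] || continue              # NIC has no kernel netdev bound
        pci_net_devs=("${pci_net_devs[@]##*/}")              # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done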
00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.122 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:30.123 02:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.123 02:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:30.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:30.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:27:30.123 00:27:30.123 --- 10.0.0.2 ping statistics --- 00:27:30.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.123 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:30.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:30.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:27:30.123 00:27:30.123 --- 10.0.0.1 ping statistics --- 00:27:30.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:30.123 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:30.123 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1511476 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1511476 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1511476 ']' 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
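Note: at this point tc3 has rebuilt the namespace topology, re-verified both ping directions, and launched a fresh nvmf_tgt (pid 1511476) inside cvl_0_0_ns_spdk with core mask 0x1E; the harness is now in waitforlisten, blocking until the target answers on /var/tmp/spdk.sock. A rough sketch of that wait pattern follows; probing with rpc_get_methods and the 0.1 s retry interval are assumptions here, not necessarily what autotest_common.sh literally does (only the max of 100 retries comes from the trace).

    # Rough sketch of the waitforlisten pattern in progress below (illustrative only).
    pid=1511476
    rpc_addr=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break                               # target is up and answering RPCs
        fi
        kill -0 "$pid" 2>/dev/null || exit 1    # target died before it started listening
        sleep 0.1
    done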
00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:30.124 02:05:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.124 [2024-07-24 02:05:45.000117] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:30.124 [2024-07-24 02:05:45.000206] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:30.381 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.382 [2024-07-24 02:05:45.065629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:30.382 [2024-07-24 02:05:45.152788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:30.382 [2024-07-24 02:05:45.152868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:30.382 [2024-07-24 02:05:45.152882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:30.382 [2024-07-24 02:05:45.152894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:30.382 [2024-07-24 02:05:45.152903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:30.382 [2024-07-24 02:05:45.152992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.382 [2024-07-24 02:05:45.153055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:30.382 [2024-07-24 02:05:45.153121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:30.382 [2024-07-24 02:05:45.153123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.639 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.639 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:30.639 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.639 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.639 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.639 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.640 [2024-07-24 02:05:45.307777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.640 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.640 Malloc1 00:27:30.640 [2024-07-24 02:05:45.397199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.640 Malloc2 00:27:30.640 Malloc3 00:27:30.640 Malloc4 00:27:30.898 Malloc5 00:27:30.898 Malloc6 00:27:30.898 Malloc7 00:27:30.898 Malloc8 00:27:30.898 Malloc9 00:27:31.156 Malloc10 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1511552 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1511552 /var/tmp/bdevperf.sock 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1511552 ']' 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:31.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
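Each Malloc bdev created above becomes the namespace of one of the ten NVMe-oF subsystems that end up listening on 10.0.0.2 port 4420; shutdown.sh batches the corresponding RPCs into rpcs.txt before issuing them against /var/tmp/spdk.sock. A minimal sketch of what a single iteration amounts to when issued directly with scripts/rpc.py (the bdev size, serial number and exact flag spellings here are illustrative and may differ from what the batched script actually sends):

  # Sketch only: one subsystem's worth of RPCs, issued one by one instead of via rpcs.txt.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  i=1
  $rpc bdev_malloc_create -b Malloc$i 64 512                            # 64 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i   # allow any host, illustrative serial
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i       # expose the bdev as the namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420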
00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.156 { 00:27:31.156 "params": { 00:27:31.156 "name": "Nvme$subsystem", 00:27:31.156 "trtype": "$TEST_TRANSPORT", 00:27:31.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.156 "adrfam": "ipv4", 00:27:31.156 "trsvcid": "$NVMF_PORT", 00:27:31.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.156 "hdgst": ${hdgst:-false}, 00:27:31.156 "ddgst": ${ddgst:-false} 00:27:31.156 }, 00:27:31.156 "method": "bdev_nvme_attach_controller" 00:27:31.156 } 00:27:31.156 EOF 00:27:31.156 )") 00:27:31.156 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": 
"bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.157 { 00:27:31.157 "params": { 00:27:31.157 "name": "Nvme$subsystem", 00:27:31.157 "trtype": "$TEST_TRANSPORT", 00:27:31.157 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.157 "adrfam": "ipv4", 00:27:31.157 "trsvcid": "$NVMF_PORT", 00:27:31.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.157 "hdgst": ${hdgst:-false}, 00:27:31.157 "ddgst": ${ddgst:-false} 00:27:31.157 }, 00:27:31.157 "method": "bdev_nvme_attach_controller" 00:27:31.157 } 00:27:31.157 EOF 00:27:31.157 )") 00:27:31.157 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:31.158 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:31.158 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:31.158 02:05:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme1", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme2", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme3", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme4", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme5", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme6", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme7", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme8", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme9", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 },{ 00:27:31.158 "params": { 00:27:31.158 "name": "Nvme10", 00:27:31.158 "trtype": "tcp", 00:27:31.158 "traddr": "10.0.0.2", 00:27:31.158 "adrfam": "ipv4", 00:27:31.158 "trsvcid": "4420", 00:27:31.158 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:31.158 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:31.158 "hdgst": false, 00:27:31.158 "ddgst": false 00:27:31.158 }, 00:27:31.158 "method": "bdev_nvme_attach_controller" 00:27:31.158 }' 00:27:31.158 [2024-07-24 02:05:45.910975] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:31.158 [2024-07-24 02:05:45.911052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1511552 ] 00:27:31.158 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.158 [2024-07-24 02:05:45.976091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.415 [2024-07-24 02:05:46.063737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.311 Running I/O for 10 seconds... 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:33.311 02:05:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:33.569 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.847 02:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=138 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 138 -ge 100 ']' 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1511476 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1511476 ']' 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1511476 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1511476 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1511476' 00:27:33.847 killing process with pid 1511476 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1511476 00:27:33.847 02:05:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1511476 00:27:33.847 [2024-07-24 02:05:48.580647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580929] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.580994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 
00:27:33.847 [2024-07-24 02:05:48.581289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581345] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581358] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581419] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581565] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is 
same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581803] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.847 [2024-07-24 02:05:48.581849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.581870] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.581892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.581910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.581929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.581956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952550 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585408] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.585961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586128] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 
00:27:33.848 [2024-07-24 02:05:48.586436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586479] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586516] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.586542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955070 is same with the state(5) to be set 00:27:33.848 [2024-07-24 02:05:48.588534] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.848 [2024-07-24 02:05:48.591185] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.848 [2024-07-24 02:05:48.591351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.848 [2024-07-24 02:05:48.591378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.848 [2024-07-24 02:05:48.591395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.849 [2024-07-24 02:05:48.591409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.849 [2024-07-24 02:05:48.591423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.849 [2024-07-24 02:05:48.591436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.849 [2024-07-24 02:05:48.591450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.849 [2024-07-24 02:05:48.591463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.849 [2024-07-24 02:05:48.591477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1902ee0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.597904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.597946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same 
with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.597965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.597977] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.597988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598111] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598147] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598355] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598383] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598549] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598704] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598716] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.598779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952a10 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.603957] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.603992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 
00:27:33.849 [2024-07-24 02:05:48.604035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604059] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.849 [2024-07-24 02:05:48.604095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604241] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604265] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604277] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is 
same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604596] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.604828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x952ed0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606106] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606155] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606216] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.850 [2024-07-24 02:05:48.606252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606278] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 
00:27:33.851 [2024-07-24 02:05:48.606437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606603] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606617] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606645] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is 
same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.606876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9533b0 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.607997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.851 [2024-07-24 02:05:48.608103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608174] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 
00:27:33.852 [2024-07-24 02:05:48.608483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608520] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608582] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.608616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953870 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609851] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609864] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is 
same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.609998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610058] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.852 [2024-07-24 02:05:48.610246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610269] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610281] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.610474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x953d50 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.853 [2024-07-24 02:05:48.611677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.853 [2024-07-24 02:05:48.611710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.853 [2024-07-24 02:05:48.611709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.853 [2024-07-24 02:05:48.611734] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.853 [2024-07-24 02:05:48.611747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.853 [2024-07-24 02:05:48.611760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.853 [2024-07-24 02:05:48.611773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.853 [2024-07-24 02:05:48.611785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddcf80 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.853 [2024-07-24 02:05:48.611859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.853 [2024-07-24 02:05:48.611870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.611872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.853 [2024-07-24 02:05:48.611898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.611910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.853 [2024-07-24 02:05:48.611923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.611951] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.853 [2024-07-24 02:05:48.611964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.611976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b940 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.611988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.853 [2024-07-24 02:05:48.612038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.612051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.853 [2024-07-24 02:05:48.612064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.612078] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.853 [2024-07-24 02:05:48.612109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.853 [2024-07-24 02:05:48.612110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.853 [2024-07-24 02:05:48.612126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45f00 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d43670 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612369] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1902ee0 (9): Bad file descriptor
00:27:33.854 [2024-07-24 02:05:48.612400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:33.854 [2024-07-24 02:05:48.612544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.854 [2024-07-24 02:05:48.612556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9f20 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set
00:27:33.854 [2024-07-24 02:05:48.612593]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.612615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954210 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.612620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8480 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.612787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.854 [2024-07-24 02:05:48.612917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.854 [2024-07-24 02:05:48.612930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1831610 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to 
be set 00:27:33.854 [2024-07-24 02:05:48.613775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.854 [2024-07-24 02:05:48.613853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.613996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with t[2024-07-24 02:05:48.614134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128he state(5) to be set 00:27:33.855 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 
00:27:33.855 [2024-07-24 02:05:48.614240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128[2024-07-24 02:05:48.614371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with t SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 he state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with t[2024-07-24 02:05:48.614391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:33.855 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 
[2024-07-24 02:05:48.614410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128[2024-07-24 02:05:48.614442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with t SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 he state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with t[2024-07-24 02:05:48.614456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:33.855 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with the state(5) to be set 00:27:33.855 [2024-07-24 02:05:48.614534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9546d0 is same with t[2024-07-24 02:05:48.614535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:12he state(5) to be set 00:27:33.855 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.855 [2024-07-24 02:05:48.614566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.855 [2024-07-24 02:05:48.614580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.856 [2024-07-24 02:05:48.614918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.614977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.614992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 
02:05:48.615232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 [2024-07-24 02:05:48.615527] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t[2024-07-24 02:05:48.615535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:33.856 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.856 [2024-07-24 02:05:48.615556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:12[2024-07-24 02:05:48.615557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.856 he state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 02:05:48.615572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 he state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 02:05:48.615672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 he state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615715] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615753] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615789] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t[2024-07-24 02:05:48.615815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:12he state(5) to be set 00:27:33.857 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615840] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615876] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 02:05:48.615878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 he state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.615966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.615978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.615990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:12[2024-07-24 02:05:48.615991] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 he state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 02:05:48.616006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 he state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.616032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.616044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.616055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.616071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.616083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.616095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t[2024-07-24 02:05:48.616108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:12he state(5) to be set 00:27:33.857 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.857 [2024-07-24 02:05:48.616120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.857 [2024-07-24 02:05:48.616132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.857 [2024-07-24 02:05:48.616137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t[2024-07-24 02:05:48.616194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:12he state(5) to be set 00:27:33.858 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with t[2024-07-24 02:05:48.616208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:27:33.858 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x954b90 is same with the state(5) to be set 00:27:33.858 [2024-07-24 02:05:48.616364] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1eb9d80 was disconnected and freed. reset controller. 
00:27:33.858 [2024-07-24 02:05:48.616833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.616966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.616986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 
[2024-07-24 02:05:48.617163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 
02:05:48.617482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.858 [2024-07-24 02:05:48.617682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.858 [2024-07-24 02:05:48.617697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 
02:05:48.617783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.617976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.617994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 
02:05:48.618096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 
02:05:48.618422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 
02:05:48.618753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.859 [2024-07-24 02:05:48.618852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.859 [2024-07-24 02:05:48.618884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.859 [2024-07-24 02:05:48.618946] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e9cdf0 was disconnected and freed. reset controller. 00:27:33.860 [2024-07-24 02:05:48.621608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:33.860 [2024-07-24 02:05:48.621642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:27:33.860 [2024-07-24 02:05:48.621708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edafb0 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.621735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8480 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.621759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddcf80 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.621792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b940 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.621822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d45f00 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.621873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.860 [2024-07-24 02:05:48.621895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.621911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.860 [2024-07-24 02:05:48.621936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.621950] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.860 [2024-07-24 02:05:48.621963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.621978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.860 [2024-07-24 02:05:48.621997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edaa50 is same with the state(5) to be set 00:27:33.860 [2024-07-24 02:05:48.622041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d43670 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.622077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9f20 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.622109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1831610 (9): Bad file descriptor 00:27:33.860 [2024-07-24 02:05:48.622619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.622971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.622985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.860 [2024-07-24 02:05:48.623488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.860 [2024-07-24 02:05:48.623505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.861 [2024-07-24 02:05:48.623766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.623977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.623992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 
02:05:48.624066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.861 [2024-07-24 02:05:48.624442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.861 [2024-07-24 02:05:48.624458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.624472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.624487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.624505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.624522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.624536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.624551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.624565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.624581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.624595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.624613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19060e0 is same with the state(5) to be set 00:27:33.862 [2024-07-24 02:05:48.625883] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.862 [2024-07-24 02:05:48.625959] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.862 [2024-07-24 02:05:48.626012] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.862 [2024-07-24 02:05:48.626081] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.862 [2024-07-24 02:05:48.626439] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:33.862 [2024-07-24 02:05:48.626783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:33.862 [2024-07-24 02:05:48.626994] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.862 [2024-07-24 02:05:48.627025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df8480 with addr=10.0.0.2, port=4420 00:27:33.862 [2024-07-24 02:05:48.627042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8480 is same with the state(5) to be set 00:27:33.862 [2024-07-24 02:05:48.627156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.862 [2024-07-24 02:05:48.627182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edafb0 with addr=10.0.0.2, port=4420 00:27:33.862 [2024-07-24 02:05:48.627197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edafb0 is same with the state(5) to be set 00:27:33.862 [2024-07-24 02:05:48.627389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627655] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.627979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.627993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.862 [2024-07-24 02:05:48.628263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.862 [2024-07-24 02:05:48.628279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.628977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.628991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.863 [2024-07-24 02:05:48.629406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.863 [2024-07-24 02:05:48.629422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9ba50 is same with the state(5) to be set 00:27:33.863 [2024-07-24 02:05:48.629495] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e9ba50 was disconnected and freed. reset controller. 
00:27:33.863 [2024-07-24 02:05:48.629651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.863 [2024-07-24 02:05:48.629679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1902ee0 with addr=10.0.0.2, port=4420 00:27:33.863 [2024-07-24 02:05:48.629696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1902ee0 is same with the state(5) to be set 00:27:33.864 [2024-07-24 02:05:48.629721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8480 (9): Bad file descriptor 00:27:33.864 [2024-07-24 02:05:48.629746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edafb0 (9): Bad file descriptor 00:27:33.864 [2024-07-24 02:05:48.631222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:33.864 [2024-07-24 02:05:48.631256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edaa50 (9): Bad file descriptor 00:27:33.864 [2024-07-24 02:05:48.631280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1902ee0 (9): Bad file descriptor 00:27:33.864 [2024-07-24 02:05:48.631297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:33.864 [2024-07-24 02:05:48.631321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:33.864 [2024-07-24 02:05:48.631339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:33.864 [2024-07-24 02:05:48.631360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:33.864 [2024-07-24 02:05:48.631374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:33.864 [2024-07-24 02:05:48.631387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:33.864 [2024-07-24 02:05:48.631483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.864 [2024-07-24 02:05:48.631505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.864 [2024-07-24 02:05:48.631528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.864 [2024-07-24 02:05:48.631545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.864 [2024-07-24 02:05:48.631559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.864 [2024-07-24 02:05:48.631871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:33.864 [2024-07-24 02:05:48.631987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.864 [2024-07-24 02:05:48.632014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edaa50 with addr=10.0.0.2, port=4420 00:27:33.864 [2024-07-24 02:05:48.632030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edaa50 is same with the state(5) to be set 00:27:33.864 [2024-07-24 02:05:48.632159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edaa50 (9): Bad file descriptor 00:27:33.864 [2024-07-24 02:05:48.632224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.632979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.632993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.633009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.633022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.633038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.633052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.633068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.633081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.633097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.864 [2024-07-24 02:05:48.633112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.864 [2024-07-24 02:05:48.633128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.865 [2024-07-24 02:05:48.633466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 
02:05:48.633777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.633979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.633993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.634230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.865 [2024-07-24 02:05:48.634245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1907380 is same with the state(5) to be set 00:27:33.865 [2024-07-24 02:05:48.635508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.865 [2024-07-24 02:05:48.635531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635957] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.635971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.635990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.866 [2024-07-24 02:05:48.636531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.866 [2024-07-24 02:05:48.636547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.636979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.636993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.867 [2024-07-24 02:05:48.637209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.867 [2024-07-24 02:05:48.637499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.867 [2024-07-24 02:05:48.637513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb5f20 is same with the state(5) to be set 00:27:33.867 [2024-07-24 
02:05:48.638746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:33.867 [2024-07-24 02:05:48.638774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for READ cid:5-63 (lba 17024-24448, len:128) and WRITE cid:0-3 (lba 24576-24960, len:128), each aborted with SQ DELETION (00/08) ...]
00:27:33.869 [2024-07-24 02:05:48.640732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb7460 is same with the state(5) to be set
[... the READ cid:0-63 (lba 16384-24448, len:128) command/completion abort sequence repeats, each entry ending in SQ DELETION (00/08) ...]
00:27:33.871 [2024-07-24 02:05:48.643937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb88f0 is same with the state(5) to be set
[... the READ cid:0-63 abort sequence repeats again for the next queue pair ...]
00:27:33.873 [2024-07-24 02:05:48.647135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e9a580 is same with the state(5) to be set
[... a further abort sequence begins; READ cid:0-16 command/completion pairs elided ...]
00:27:33.873 [2024-07-24 02:05:48.649327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.873 [2024-07-24 02:05:48.649802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.873 [2024-07-24 02:05:48.649819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.649833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.649849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.649862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.649878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.649892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.649909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.649923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.649939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.649953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.649969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.649984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:33.874 [2024-07-24 02:05:48.650255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 
02:05:48.650567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:33.874 [2024-07-24 02:05:48.650747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.874 [2024-07-24 02:05:48.650761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d15200 is same with the state(5) to be set 00:27:33.874 [2024-07-24 02:05:48.652336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:33.874 [2024-07-24 02:05:48.652368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:33.874 [2024-07-24 02:05:48.652387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:33.874 [2024-07-24 02:05:48.652405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:33.874 [2024-07-24 02:05:48.652464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:33.874 [2024-07-24 02:05:48.652482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:33.874 [2024-07-24 02:05:48.652499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:27:33.874 [2024-07-24 02:05:48.652596] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:33.874 [2024-07-24 02:05:48.652629] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:33.874 [2024-07-24 02:05:48.652656] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:33.874 [2024-07-24 02:05:48.652746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:33.874 task offset: 16384 on job bdev=Nvme6n1 fails
00:27:33.874
00:27:33.874                                                                                             Latency(us)
00:27:33.874 Device Information                                                         : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:27:33.874 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.874 Job: Nvme1n1 ended in about 0.91 seconds with error
00:27:33.874 Verification LBA range: start 0x0 length 0x400
00:27:33.874 Nvme1n1                                                                    :       0.91     159.03       9.94      70.68     0.00   275533.14   26408.58  276513.37
00:27:33.875 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme2n1 ended in about 0.92 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme2n1                                                                    :       0.92     139.87       8.74      69.94     0.00   295603.52   23981.32  276513.37
00:27:33.875 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme3n1 ended in about 0.92 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme3n1                                                                    :       0.92     209.07      13.07      69.69     0.00   217800.25   16893.72  257872.02
00:27:33.875 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme4n1 ended in about 0.92 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme4n1                                                                    :       0.92     143.23       8.95      69.45     0.00   279569.99   21651.15  257872.02
00:27:33.875 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme5n1 ended in about 0.92 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme5n1                                                                    :       0.92     138.41       8.65      69.21     0.00   280289.53   21262.79  259425.47
00:27:33.875 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme6n1 ended in about 0.90 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme6n1                                                                    :       0.90     142.20       8.89      71.10     0.00   265994.62    6092.42  306028.85
00:27:33.875 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme7n1 ended in about 0.93 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme7n1                                                                    :       0.93     137.94       8.62      68.97     0.00   269302.90   18932.62  248551.35
00:27:33.875 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme8n1 ended in about 0.91 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme8n1                                                                    :       0.91     210.77      13.17      70.26     0.00   193223.87   16602.45  257872.02
00:27:33.875 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme9n1 ended in about 0.90 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme9n1                                                                    :       0.90     213.01      13.31      71.00     0.00   186393.13    6893.42  254765.13
00:27:33.875 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:33.875 Job: Nvme10n1 ended in about 0.93 seconds with error
00:27:33.875 Verification LBA range: start 0x0 length 0x400
00:27:33.875 Nvme10n1                                                                   :       0.93     137.40       8.59      68.70     0.00   252855.44   20291.89  262532.36
00:27:33.875 ===================================================================================================================
00:27:33.875 Total                                                                      :               1630.93     101.93     698.98     0.00   247158.66    6092.42  306028.85
00:27:33.875 [2024-07-24 02:05:48.677872] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:33.875 [2024-07-24 02:05:48.677962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:33.875 [2024-07-24 02:05:48.677995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.875 [2024-07-24 02:05:48.678312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.875 [2024-07-24 02:05:48.678366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3b940 with addr=10.0.0.2, port=4420
00:27:33.875 [2024-07-24 02:05:48.678386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3b940 is same with the state(5) to be set
00:27:33.875 [2024-07-24 02:05:48.678501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.875 [2024-07-24 02:05:48.678530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d43670 with addr=10.0.0.2, port=4420
00:27:33.875 [2024-07-24 02:05:48.678547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d43670 is same with the state(5) to be set
00:27:33.875 [2024-07-24 02:05:48.678685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.875 [2024-07-24 02:05:48.678712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d45f00 with addr=10.0.0.2, port=4420
00:27:33.875 [2024-07-24 02:05:48.678727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45f00 is same with the state(5) to be set
00:27:33.875 [2024-07-24 02:05:48.678830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.875 [2024-07-24 02:05:48.678856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df9f20 with addr=10.0.0.2, port=4420
00:27:33.875 [2024-07-24 02:05:48.678872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df9f20 is same with the state(5) to be set
00:27:33.875 [2024-07-24 02:05:48.680507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:33.875 [2024-07-24 02:05:48.680536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:33.875 [2024-07-24 02:05:48.680560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:33.875 [2024-07-24 02:05:48.680740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.875 [2024-07-24 02:05:48.680769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1831610 with addr=10.0.0.2, port=4420
00:27:33.875 [2024-07-24 02:05:48.680786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1831610 is same with the state(5) to be set
00:27:33.875 [2024-07-24 02:05:48.680888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:33.875 [2024-07-24
02:05:48.680915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ddcf80 with addr=10.0.0.2, port=4420 00:27:33.875 [2024-07-24 02:05:48.680930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ddcf80 is same with the state(5) to be set 00:27:33.875 [2024-07-24 02:05:48.680956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d3b940 (9): Bad file descriptor 00:27:33.875 [2024-07-24 02:05:48.680978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d43670 (9): Bad file descriptor 00:27:33.875 [2024-07-24 02:05:48.680996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d45f00 (9): Bad file descriptor 00:27:33.875 [2024-07-24 02:05:48.681013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df9f20 (9): Bad file descriptor 00:27:33.875 [2024-07-24 02:05:48.681074] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.875 [2024-07-24 02:05:48.681098] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.875 [2024-07-24 02:05:48.681120] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.875 [2024-07-24 02:05:48.681139] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:33.875 [2024-07-24 02:05:48.681327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.875 [2024-07-24 02:05:48.681356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edafb0 with addr=10.0.0.2, port=4420 00:27:33.875 [2024-07-24 02:05:48.681372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edafb0 is same with the state(5) to be set 00:27:33.875 [2024-07-24 02:05:48.681477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.875 [2024-07-24 02:05:48.681504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df8480 with addr=10.0.0.2, port=4420 00:27:33.875 [2024-07-24 02:05:48.681521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df8480 is same with the state(5) to be set 00:27:33.875 [2024-07-24 02:05:48.681624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.875 [2024-07-24 02:05:48.681651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1902ee0 with addr=10.0.0.2, port=4420 00:27:33.875 [2024-07-24 02:05:48.681667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1902ee0 is same with the state(5) to be set 00:27:33.875 [2024-07-24 02:05:48.681686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1831610 (9): Bad file descriptor 00:27:33.875 [2024-07-24 02:05:48.681705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ddcf80 (9): Bad file descriptor 00:27:33.875 [2024-07-24 02:05:48.681721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:33.875 [2024-07-24 02:05:48.681735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 
00:27:33.875 [2024-07-24 02:05:48.681751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:27:33.875 [2024-07-24 02:05:48.681770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:33.875 [2024-07-24 02:05:48.681784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:33.875 [2024-07-24 02:05:48.681797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:33.875 [2024-07-24 02:05:48.681815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:27:33.875 [2024-07-24 02:05:48.681829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.681842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:27:33.876 [2024-07-24 02:05:48.681858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.681872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.681885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:27:33.876 [2024-07-24 02:05:48.681987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:27:33.876 [2024-07-24 02:05:48.682012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edafb0 (9): Bad file descriptor 00:27:33.876 [2024-07-24 02:05:48.682093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df8480 (9): Bad file descriptor 00:27:33.876 [2024-07-24 02:05:48.682112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1902ee0 (9): Bad file descriptor 00:27:33.876 [2024-07-24 02:05:48.682127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.682140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.682153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:27:33.876 [2024-07-24 02:05:48.682169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.682183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.682201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:27:33.876 [2024-07-24 02:05:48.682241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:33.876 [2024-07-24 02:05:48.682466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edaa50 with addr=10.0.0.2, port=4420 00:27:33.876 [2024-07-24 02:05:48.682482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edaa50 is same with the state(5) to be set 00:27:33.876 [2024-07-24 02:05:48.682497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.682510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.682523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:33.876 [2024-07-24 02:05:48.682541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.682555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.682568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:33.876 [2024-07-24 02:05:48.682584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.682597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.682610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:33.876 [2024-07-24 02:05:48.682657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:33.876 [2024-07-24 02:05:48.682704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edaa50 (9): Bad file descriptor 00:27:33.876 [2024-07-24 02:05:48.682744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:27:33.876 [2024-07-24 02:05:48.682763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:27:33.876 [2024-07-24 02:05:48.682777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:27:33.876 [2024-07-24 02:05:48.682814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:34.440 02:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:34.440 02:05:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:35.373 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1511552 00:27:35.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1511552) - No such process 00:27:35.373 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:35.373 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:35.373 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:35.373 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:35.373 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:35.374 rmmod nvme_tcp 00:27:35.374 rmmod nvme_fabrics 00:27:35.374 rmmod nvme_keyring 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.374 02:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.374 02:05:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:37.904 00:27:37.904 real 0m7.509s 00:27:37.904 user 0m18.298s 00:27:37.904 sys 0m1.511s 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:37.904 ************************************ 00:27:37.904 END TEST nvmf_shutdown_tc3 00:27:37.904 ************************************ 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:37.904 00:27:37.904 real 0m27.175s 00:27:37.904 user 1m16.092s 00:27:37.904 sys 0m6.340s 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:37.904 ************************************ 00:27:37.904 END TEST nvmf_shutdown 00:27:37.904 ************************************ 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:27:37.904 00:27:37.904 real 16m42.327s 00:27:37.904 user 47m3.164s 00:27:37.904 sys 3m50.703s 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:37.904 02:05:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:37.904 ************************************ 00:27:37.904 END TEST nvmf_target_extra 00:27:37.904 ************************************ 00:27:37.904 02:05:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:37.904 02:05:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.904 02:05:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.904 02:05:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.904 ************************************ 00:27:37.904 START TEST nvmf_host 00:27:37.904 ************************************ 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:27:37.904 * Looking for test storage... 
00:27:37.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.904 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.905 ************************************ 00:27:37.905 START TEST nvmf_multicontroller 00:27:37.905 ************************************ 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:37.905 * Looking for test storage... 
00:27:37.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:37.905 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:37.906 02:05:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.804 02:05:54 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:39.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:39.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:39.804 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.804 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:39.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:27:39.805 00:27:39.805 --- 10.0.0.2 ping statistics --- 00:27:39.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.805 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:39.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:27:39.805 00:27:39.805 --- 10.0.0.1 ping statistics --- 00:27:39.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.805 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1514085 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1514085 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1514085 ']' 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.805 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:39.805 [2024-07-24 02:05:54.637962] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:27:39.805 [2024-07-24 02:05:54.638060] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.805 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.063 [2024-07-24 02:05:54.703579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:40.063 [2024-07-24 02:05:54.792307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.063 [2024-07-24 02:05:54.792399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.063 [2024-07-24 02:05:54.792414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.063 [2024-07-24 02:05:54.792425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.063 [2024-07-24 02:05:54.792436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.063 [2024-07-24 02:05:54.792565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.063 [2024-07-24 02:05:54.792596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.063 [2024-07-24 02:05:54.792599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.063 [2024-07-24 02:05:54.936230] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.063 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.321 Malloc0 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.321 
02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.321 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.322 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 [2024-07-24 02:05:54.998167] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 [2024-07-24 02:05:55.006038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 Malloc1 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1514177 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1514177 /var/tmp/bdevperf.sock 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1514177 ']' 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:40.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
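For reference, the target-side configuration driven above over the default RPC socket (/var/tmp/spdk.sock) reduces to the sequence below, written as plain rpc.py calls (a minimal sketch assuming SPDK's scripts/rpc.py, which rpc_cmd wraps; the NQNs, malloc sizes and ports are the ones printed in the trace):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 is built the same way around Malloc1 and the same two listeners (multicontroller.sh@36-41)
The nvmf_tgt itself runs inside the cvl_0_0_ns_spdk namespace created earlier, so the 10.0.0.2 listeners live on cvl_0_0 in that namespace and are reached from the host over cvl_0_1 (10.0.0.1), as the ping exchange above confirms.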
00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:40.322 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.582 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.582 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:40.582 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:40.582 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.582 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.870 NVMe0n1 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.870 1 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:40.870 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.871 request: 00:27:40.871 { 00:27:40.871 "name": "NVMe0", 00:27:40.871 "trtype": "tcp", 00:27:40.871 "traddr": "10.0.0.2", 00:27:40.871 "adrfam": "ipv4", 00:27:40.871 
"trsvcid": "4420", 00:27:40.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.871 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:40.871 "hostaddr": "10.0.0.2", 00:27:40.871 "hostsvcid": "60000", 00:27:40.871 "prchk_reftag": false, 00:27:40.871 "prchk_guard": false, 00:27:40.871 "hdgst": false, 00:27:40.871 "ddgst": false, 00:27:40.871 "method": "bdev_nvme_attach_controller", 00:27:40.871 "req_id": 1 00:27:40.871 } 00:27:40.871 Got JSON-RPC error response 00:27:40.871 response: 00:27:40.871 { 00:27:40.871 "code": -114, 00:27:40.871 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:40.871 } 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.871 request: 00:27:40.871 { 00:27:40.871 "name": "NVMe0", 00:27:40.871 "trtype": "tcp", 00:27:40.871 "traddr": "10.0.0.2", 00:27:40.871 "adrfam": "ipv4", 00:27:40.871 "trsvcid": "4420", 00:27:40.871 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:40.871 "hostaddr": "10.0.0.2", 00:27:40.871 "hostsvcid": "60000", 00:27:40.871 "prchk_reftag": false, 00:27:40.871 "prchk_guard": false, 00:27:40.871 "hdgst": false, 00:27:40.871 "ddgst": false, 00:27:40.871 "method": "bdev_nvme_attach_controller", 00:27:40.871 "req_id": 1 00:27:40.871 } 00:27:40.871 Got JSON-RPC error response 00:27:40.871 response: 00:27:40.871 { 00:27:40.871 "code": -114, 00:27:40.871 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:27:40.871 } 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.871 request: 00:27:40.871 { 00:27:40.871 "name": "NVMe0", 00:27:40.871 "trtype": "tcp", 00:27:40.871 "traddr": "10.0.0.2", 00:27:40.871 "adrfam": "ipv4", 00:27:40.871 "trsvcid": "4420", 00:27:40.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.871 "hostaddr": "10.0.0.2", 00:27:40.871 "hostsvcid": "60000", 00:27:40.871 "prchk_reftag": false, 00:27:40.871 "prchk_guard": false, 00:27:40.871 "hdgst": false, 00:27:40.871 "ddgst": false, 00:27:40.871 "multipath": "disable", 00:27:40.871 "method": "bdev_nvme_attach_controller", 00:27:40.871 "req_id": 1 00:27:40.871 } 00:27:40.871 Got JSON-RPC error response 00:27:40.871 response: 00:27:40.871 { 00:27:40.871 "code": -114, 00:27:40.871 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:40.871 } 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.871 request: 00:27:40.871 { 00:27:40.871 "name": "NVMe0", 00:27:40.871 "trtype": "tcp", 00:27:40.871 "traddr": "10.0.0.2", 00:27:40.871 "adrfam": "ipv4", 00:27:40.871 "trsvcid": "4420", 00:27:40.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.871 "hostaddr": "10.0.0.2", 00:27:40.871 "hostsvcid": "60000", 00:27:40.871 "prchk_reftag": false, 00:27:40.871 "prchk_guard": false, 00:27:40.871 "hdgst": false, 00:27:40.871 "ddgst": false, 00:27:40.871 "multipath": "failover", 00:27:40.871 "method": "bdev_nvme_attach_controller", 00:27:40.871 "req_id": 1 00:27:40.871 } 00:27:40.871 Got JSON-RPC error response 00:27:40.871 response: 00:27:40.871 { 00:27:40.871 "code": -114, 00:27:40.871 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:40.871 } 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.871 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.871 00:27:40.872 02:05:55 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.872 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:41.130 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:41.130 02:05:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:42.064 0 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1514177 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1514177 ']' 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1514177 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.064 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1514177 00:27:42.322 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
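On the host side, the bdevperf half of the run reduces to the commands below (again a sketch reconstructed from the trace, not an authoritative recipe; paths are relative to the spdk checkout in this workspace, and bdevperf is started in the background with -z so it waits to be configured over its private RPC socket):
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # first controller against the 4420 listener, pinned to host address 10.0.0.2 and host service id 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # re-attaching the name NVMe0 with a different hostnqn, a different subsystem, or -x disable/failover
  # is the negative half of the test and is expected to fail with the -114 responses shown above
  # second controller against the 4421 listener
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # drive the queued 4096-byte write workload for one second
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests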
00:27:42.322 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.322 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1514177' 00:27:42.322 killing process with pid 1514177 00:27:42.322 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1514177 00:27:42.322 02:05:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1514177 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:42.322 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:27:42.580 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:42.580 [2024-07-24 02:05:55.111389] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:27:42.580 [2024-07-24 02:05:55.111492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514177 ] 00:27:42.580 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.580 [2024-07-24 02:05:55.171474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.580 [2024-07-24 02:05:55.256889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.580 [2024-07-24 02:05:55.778506] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name d08742ab-260d-4493-a763-8e9309dfb52e already exists 00:27:42.580 [2024-07-24 02:05:55.778544] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:d08742ab-260d-4493-a763-8e9309dfb52e alias for bdev NVMe1n1 00:27:42.580 [2024-07-24 02:05:55.778574] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:42.580 Running I/O for 1 seconds... 00:27:42.580 00:27:42.580 Latency(us) 00:27:42.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.580 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:42.580 NVMe0n1 : 1.01 17990.93 70.28 0.00 0.00 7103.56 2038.90 12621.75 00:27:42.580 =================================================================================================================== 00:27:42.580 Total : 17990.93 70.28 0.00 0.00 7103.56 2038.90 12621.75 00:27:42.580 Received shutdown signal, test time was about 1.000000 seconds 00:27:42.580 00:27:42.580 Latency(us) 00:27:42.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.580 =================================================================================================================== 00:27:42.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.580 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.580 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:42.580 rmmod nvme_tcp 00:27:42.580 rmmod nvme_fabrics 00:27:42.581 rmmod nvme_keyring 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1514085 ']' 00:27:42.581 02:05:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1514085 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1514085 ']' 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1514085 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1514085 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1514085' 00:27:42.581 killing process with pid 1514085 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1514085 00:27:42.581 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1514085 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.838 02:05:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.738 02:05:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.996 00:27:44.996 real 0m7.183s 00:27:44.996 user 0m11.040s 00:27:44.996 sys 0m2.251s 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:44.997 ************************************ 00:27:44.997 END TEST nvmf_multicontroller 00:27:44.997 ************************************ 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.997 ************************************ 00:27:44.997 START TEST nvmf_aer 00:27:44.997 ************************************ 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:44.997 * Looking for test storage... 00:27:44.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.997 02:05:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.898 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:46.899 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:46.899 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:46.899 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.899 02:06:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:46.899 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:46.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:27:46.899 00:27:46.899 --- 10.0.0.2 ping statistics --- 00:27:46.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.899 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:27:46.899 00:27:46.899 --- 10.0.0.1 ping statistics --- 00:27:46.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.899 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1516411 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1516411 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1516411 ']' 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.899 02:06:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:46.899 [2024-07-24 02:06:01.728961] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
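For readability, the nvmf_tcp_init sequence traced above condenses to the following steps; the interface names (cvl_0_0/cvl_0_1), addresses, and nvmf_tgt invocation are taken from this run, the workspace path to nvmf_tgt is shortened to its repo-relative form, and this is an illustrative sketch rather than a substitute for nvmf/common.sh:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                          # target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # nvmfappstart -m 0xF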
00:27:46.899 [2024-07-24 02:06:01.729049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.899 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.157 [2024-07-24 02:06:01.795527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.157 [2024-07-24 02:06:01.892658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.157 [2024-07-24 02:06:01.892710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.157 [2024-07-24 02:06:01.892724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.157 [2024-07-24 02:06:01.892736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.157 [2024-07-24 02:06:01.892745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.157 [2024-07-24 02:06:01.892832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.157 [2024-07-24 02:06:01.893593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.157 [2024-07-24 02:06:01.893653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.157 [2024-07-24 02:06:01.893655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.157 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 [2024-07-24 02:06:02.053869] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 Malloc0 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 02:06:02 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 [2024-07-24 02:06:02.107547] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.415 [ 00:27:47.415 { 00:27:47.415 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:47.415 "subtype": "Discovery", 00:27:47.415 "listen_addresses": [], 00:27:47.415 "allow_any_host": true, 00:27:47.415 "hosts": [] 00:27:47.415 }, 00:27:47.415 { 00:27:47.415 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.415 "subtype": "NVMe", 00:27:47.415 "listen_addresses": [ 00:27:47.415 { 00:27:47.415 "trtype": "TCP", 00:27:47.415 "adrfam": "IPv4", 00:27:47.415 "traddr": "10.0.0.2", 00:27:47.415 "trsvcid": "4420" 00:27:47.415 } 00:27:47.415 ], 00:27:47.415 "allow_any_host": true, 00:27:47.415 "hosts": [], 00:27:47.415 "serial_number": "SPDK00000000000001", 00:27:47.415 "model_number": "SPDK bdev Controller", 00:27:47.415 "max_namespaces": 2, 00:27:47.415 "min_cntlid": 1, 00:27:47.415 "max_cntlid": 65519, 00:27:47.415 "namespaces": [ 00:27:47.415 { 00:27:47.415 "nsid": 1, 00:27:47.415 "bdev_name": "Malloc0", 00:27:47.415 "name": "Malloc0", 00:27:47.415 "nguid": "44E63921CE7045BBA2EC7CD5C84F0D78", 00:27:47.415 "uuid": "44e63921-ce70-45bb-a2ec-7cd5c84f0d78" 00:27:47.415 } 00:27:47.415 ] 00:27:47.415 } 00:27:47.415 ] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1516574 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:47.415 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:27:47.416 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:27:47.416 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.673 Malloc1 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.673 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.673 Asynchronous Event Request test 00:27:47.673 Attaching to 10.0.0.2 00:27:47.673 Attached to 10.0.0.2 00:27:47.673 Registering asynchronous event callbacks... 00:27:47.673 Starting namespace attribute notice tests for all controllers... 00:27:47.673 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:47.673 aer_cb - Changed Namespace 00:27:47.673 Cleaning up... 
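The interleaved xtrace above reduces to the flat RPC sequence that host/aer.sh drives; rpc_cmd forwards to scripts/rpc.py, so replayed by hand against the target in the namespace it would look roughly like this (arguments copied from the trace, paths repo-relative):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2   # -m 2: max_namespaces=2, as in the JSON dump
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# background AER listener; the harness waits for /tmp/aer_touch_file before moving on
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# registering a second namespace is what produces the "Changed Namespace" notice above
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2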
00:27:47.673 [ 00:27:47.673 { 00:27:47.673 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:47.673 "subtype": "Discovery", 00:27:47.673 "listen_addresses": [], 00:27:47.673 "allow_any_host": true, 00:27:47.673 "hosts": [] 00:27:47.673 }, 00:27:47.673 { 00:27:47.673 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.673 "subtype": "NVMe", 00:27:47.673 "listen_addresses": [ 00:27:47.673 { 00:27:47.673 "trtype": "TCP", 00:27:47.674 "adrfam": "IPv4", 00:27:47.674 "traddr": "10.0.0.2", 00:27:47.674 "trsvcid": "4420" 00:27:47.674 } 00:27:47.674 ], 00:27:47.674 "allow_any_host": true, 00:27:47.674 "hosts": [], 00:27:47.674 "serial_number": "SPDK00000000000001", 00:27:47.674 "model_number": "SPDK bdev Controller", 00:27:47.674 "max_namespaces": 2, 00:27:47.674 "min_cntlid": 1, 00:27:47.674 "max_cntlid": 65519, 00:27:47.674 "namespaces": [ 00:27:47.674 { 00:27:47.674 "nsid": 1, 00:27:47.674 "bdev_name": "Malloc0", 00:27:47.674 "name": "Malloc0", 00:27:47.674 "nguid": "44E63921CE7045BBA2EC7CD5C84F0D78", 00:27:47.674 "uuid": "44e63921-ce70-45bb-a2ec-7cd5c84f0d78" 00:27:47.674 }, 00:27:47.674 { 00:27:47.674 "nsid": 2, 00:27:47.674 "bdev_name": "Malloc1", 00:27:47.674 "name": "Malloc1", 00:27:47.674 "nguid": "136A027C78F14F0899D58C57E1BF6ECD", 00:27:47.674 "uuid": "136a027c-78f1-4f08-99d5-8c57e1bf6ecd" 00:27:47.674 } 00:27:47.674 ] 00:27:47.674 } 00:27:47.674 ] 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1516574 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:47.674 rmmod 
nvme_tcp 00:27:47.674 rmmod nvme_fabrics 00:27:47.674 rmmod nvme_keyring 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1516411 ']' 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1516411 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1516411 ']' 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1516411 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.674 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1516411 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1516411' 00:27:47.932 killing process with pid 1516411 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1516411 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1516411 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.932 02:06:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.461 00:27:50.461 real 0m5.183s 00:27:50.461 user 0m4.214s 00:27:50.461 sys 0m1.763s 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:50.461 ************************************ 00:27:50.461 END TEST nvmf_aer 00:27:50.461 ************************************ 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.461 
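Both host tests end with the same nvmftestfini pattern; condensed from the trace it is roughly the sequence below (the body of _remove_spdk_ns runs with xtrace disabled, so the namespace-removal line is an assumption, not copied from the log):

sync
modprobe -v -r nvme-tcp              # unloads nvme_tcp, nvme_fabrics, nvme_keyring per the rmmod lines
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=1516411 for the aer run, 1514085 for multicontroller
ip netns delete cvl_0_0_ns_spdk      # assumed content of _remove_spdk_ns (not shown in the trace)
ip -4 addr flush cvl_0_1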
************************************ 00:27:50.461 START TEST nvmf_async_init 00:27:50.461 ************************************ 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:50.461 * Looking for test storage... 00:27:50.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.461 
02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f8775516062c42d6a8affb4949329f08 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.461 02:06:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:52.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.359 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.360 
02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.360 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.360 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.360 02:06:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.360 02:06:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:27:52.360 00:27:52.360 --- 10.0.0.2 ping statistics --- 00:27:52.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.360 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:27:52.360 00:27:52.360 --- 10.0.0.1 ping statistics --- 00:27:52.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.360 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1518508 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1518508 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1518508 ']' 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.360 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.360 [2024-07-24 02:06:07.125538] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
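In plain shell terms, the nvmf_tcp_init sequence traced above amounts to the sketch below. This is a condensed, illustrative reconstruction, not the literal test script: the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are copied from this run, and "$rootdir" stands for the SPDK checkout (here /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk).

# Move the target-side NIC into its own network namespace; the initiator-side NIC stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator gets 10.0.0.1, the target (inside the namespace) gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring both ends and the namespace loopback up, and open TCP/4420 on the initiator side.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity pings in both directions, load the kernel nvme-tcp module, then start the
# target application inside the namespace (single core, all trace groups enabled).
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &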
00:27:52.360 [2024-07-24 02:06:07.125608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.360 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.360 [2024-07-24 02:06:07.191180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.618 [2024-07-24 02:06:07.284327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.618 [2024-07-24 02:06:07.284393] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.618 [2024-07-24 02:06:07.284419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.618 [2024-07-24 02:06:07.284441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.618 [2024-07-24 02:06:07.284453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.618 [2024-07-24 02:06:07.284500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.618 [2024-07-24 02:06:07.429366] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.618 null0 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:52.618 02:06:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.618 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f8775516062c42d6a8affb4949329f08 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.619 [2024-07-24 02:06:07.469628] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.619 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 nvme0n1 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 [ 00:27:52.881 { 00:27:52.881 "name": "nvme0n1", 00:27:52.881 "aliases": [ 00:27:52.881 "f8775516-062c-42d6-a8af-fb4949329f08" 00:27:52.881 ], 00:27:52.881 "product_name": "NVMe disk", 00:27:52.881 "block_size": 512, 00:27:52.881 "num_blocks": 2097152, 00:27:52.881 "uuid": "f8775516-062c-42d6-a8af-fb4949329f08", 00:27:52.881 "assigned_rate_limits": { 00:27:52.881 "rw_ios_per_sec": 0, 00:27:52.881 "rw_mbytes_per_sec": 0, 00:27:52.881 "r_mbytes_per_sec": 0, 00:27:52.881 "w_mbytes_per_sec": 0 00:27:52.881 }, 00:27:52.881 "claimed": false, 00:27:52.881 "zoned": false, 00:27:52.881 "supported_io_types": { 00:27:52.881 "read": true, 00:27:52.881 "write": true, 00:27:52.881 "unmap": false, 00:27:52.881 "flush": true, 00:27:52.881 "reset": true, 00:27:52.881 "nvme_admin": true, 00:27:52.881 "nvme_io": true, 00:27:52.881 "nvme_io_md": false, 00:27:52.881 "write_zeroes": true, 00:27:52.881 "zcopy": false, 00:27:52.881 "get_zone_info": false, 00:27:52.881 "zone_management": false, 00:27:52.881 "zone_append": false, 00:27:52.881 "compare": true, 00:27:52.881 "compare_and_write": true, 00:27:52.881 "abort": true, 00:27:52.881 "seek_hole": false, 00:27:52.881 "seek_data": false, 00:27:52.881 "copy": true, 00:27:52.881 "nvme_iov_md": 
false 00:27:52.881 }, 00:27:52.881 "memory_domains": [ 00:27:52.881 { 00:27:52.881 "dma_device_id": "system", 00:27:52.881 "dma_device_type": 1 00:27:52.881 } 00:27:52.881 ], 00:27:52.881 "driver_specific": { 00:27:52.881 "nvme": [ 00:27:52.881 { 00:27:52.881 "trid": { 00:27:52.881 "trtype": "TCP", 00:27:52.881 "adrfam": "IPv4", 00:27:52.881 "traddr": "10.0.0.2", 00:27:52.881 "trsvcid": "4420", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:52.881 }, 00:27:52.881 "ctrlr_data": { 00:27:52.881 "cntlid": 1, 00:27:52.881 "vendor_id": "0x8086", 00:27:52.881 "model_number": "SPDK bdev Controller", 00:27:52.881 "serial_number": "00000000000000000000", 00:27:52.881 "firmware_revision": "24.09", 00:27:52.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.881 "oacs": { 00:27:52.881 "security": 0, 00:27:52.881 "format": 0, 00:27:52.881 "firmware": 0, 00:27:52.881 "ns_manage": 0 00:27:52.881 }, 00:27:52.881 "multi_ctrlr": true, 00:27:52.881 "ana_reporting": false 00:27:52.881 }, 00:27:52.881 "vs": { 00:27:52.881 "nvme_version": "1.3" 00:27:52.881 }, 00:27:52.881 "ns_data": { 00:27:52.881 "id": 1, 00:27:52.881 "can_share": true 00:27:52.881 } 00:27:52.881 } 00:27:52.881 ], 00:27:52.881 "mp_policy": "active_passive" 00:27:52.881 } 00:27:52.881 } 00:27:52.881 ] 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.881 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:52.881 [2024-07-24 02:06:07.722107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:52.881 [2024-07-24 02:06:07.722189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6cfd20 (9): Bad file descriptor 00:27:53.139 [2024-07-24 02:06:07.864450] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
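All of the rpc_cmd calls traced above go to the single nvmf_tgt instance over its default /var/tmp/spdk.sock socket (rpc_cmd is the autotest wrapper around scripts/rpc.py). The async_init flow up to the controller reset is roughly the following sketch, with the namespace UUID and NQN copied from this run:

rpc="$rootdir"/scripts/rpc.py    # assumes the SPDK checkout in $rootdir; default socket /var/tmp/spdk.sock

# Create the TCP transport and a 1024-block x 512-byte null bdev to export.
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_null_create null0 1024 512
$rpc bdev_wait_for_examine

# Subsystem cnode0 with the null bdev as its namespace (fixed UUID) and a TCP listener on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f8775516062c42d6a8affb4949329f08
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Attach back to that listener as an NVMe bdev, dump it, then reset the controller.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1
$rpc bdev_nvme_reset_controller nvme0

After the reset, the second bdev_get_bdevs dump below reports the same namespace UUID but cntlid 2 instead of 1, i.e. the bdev survived the reset and the controller reconnected under a new controller ID.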
00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 [ 00:27:53.139 { 00:27:53.139 "name": "nvme0n1", 00:27:53.139 "aliases": [ 00:27:53.139 "f8775516-062c-42d6-a8af-fb4949329f08" 00:27:53.139 ], 00:27:53.139 "product_name": "NVMe disk", 00:27:53.139 "block_size": 512, 00:27:53.139 "num_blocks": 2097152, 00:27:53.139 "uuid": "f8775516-062c-42d6-a8af-fb4949329f08", 00:27:53.139 "assigned_rate_limits": { 00:27:53.139 "rw_ios_per_sec": 0, 00:27:53.139 "rw_mbytes_per_sec": 0, 00:27:53.139 "r_mbytes_per_sec": 0, 00:27:53.139 "w_mbytes_per_sec": 0 00:27:53.139 }, 00:27:53.139 "claimed": false, 00:27:53.139 "zoned": false, 00:27:53.139 "supported_io_types": { 00:27:53.139 "read": true, 00:27:53.139 "write": true, 00:27:53.139 "unmap": false, 00:27:53.139 "flush": true, 00:27:53.139 "reset": true, 00:27:53.139 "nvme_admin": true, 00:27:53.139 "nvme_io": true, 00:27:53.139 "nvme_io_md": false, 00:27:53.139 "write_zeroes": true, 00:27:53.139 "zcopy": false, 00:27:53.139 "get_zone_info": false, 00:27:53.139 "zone_management": false, 00:27:53.139 "zone_append": false, 00:27:53.139 "compare": true, 00:27:53.139 "compare_and_write": true, 00:27:53.139 "abort": true, 00:27:53.139 "seek_hole": false, 00:27:53.139 "seek_data": false, 00:27:53.139 "copy": true, 00:27:53.139 "nvme_iov_md": false 00:27:53.139 }, 00:27:53.139 "memory_domains": [ 00:27:53.139 { 00:27:53.139 "dma_device_id": "system", 00:27:53.139 "dma_device_type": 1 00:27:53.139 } 00:27:53.139 ], 00:27:53.139 "driver_specific": { 00:27:53.139 "nvme": [ 00:27:53.139 { 00:27:53.139 "trid": { 00:27:53.139 "trtype": "TCP", 00:27:53.139 "adrfam": "IPv4", 00:27:53.139 "traddr": "10.0.0.2", 00:27:53.139 "trsvcid": "4420", 00:27:53.139 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:53.139 }, 00:27:53.139 "ctrlr_data": { 00:27:53.139 "cntlid": 2, 00:27:53.139 "vendor_id": "0x8086", 00:27:53.139 "model_number": "SPDK bdev Controller", 00:27:53.139 "serial_number": "00000000000000000000", 00:27:53.139 "firmware_revision": "24.09", 00:27:53.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.139 "oacs": { 00:27:53.139 "security": 0, 00:27:53.139 "format": 0, 00:27:53.139 "firmware": 0, 00:27:53.139 "ns_manage": 0 00:27:53.139 }, 00:27:53.139 "multi_ctrlr": true, 00:27:53.139 "ana_reporting": false 00:27:53.139 }, 00:27:53.139 "vs": { 00:27:53.139 "nvme_version": "1.3" 00:27:53.139 }, 00:27:53.139 "ns_data": { 00:27:53.139 "id": 1, 00:27:53.139 "can_share": true 00:27:53.139 } 00:27:53.139 } 00:27:53.139 ], 00:27:53.139 "mp_policy": "active_passive" 00:27:53.139 } 00:27:53.139 } 00:27:53.139 ] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.Arkqj7Da7Y 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.Arkqj7Da7Y 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 [2024-07-24 02:06:07.914760] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:53.139 [2024-07-24 02:06:07.914943] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Arkqj7Da7Y 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 [2024-07-24 02:06:07.922769] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Arkqj7Da7Y 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 [2024-07-24 02:06:07.930797] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:53.139 [2024-07-24 02:06:07.930853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:53.139 nvme0n1 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:27:53.139 02:06:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.139 [ 00:27:53.139 { 00:27:53.139 "name": "nvme0n1", 00:27:53.139 "aliases": [ 00:27:53.139 "f8775516-062c-42d6-a8af-fb4949329f08" 00:27:53.139 ], 00:27:53.139 "product_name": "NVMe disk", 00:27:53.139 "block_size": 512, 00:27:53.139 "num_blocks": 2097152, 00:27:53.139 "uuid": "f8775516-062c-42d6-a8af-fb4949329f08", 00:27:53.139 "assigned_rate_limits": { 00:27:53.139 "rw_ios_per_sec": 0, 00:27:53.139 "rw_mbytes_per_sec": 0, 00:27:53.139 "r_mbytes_per_sec": 0, 00:27:53.139 "w_mbytes_per_sec": 0 00:27:53.139 }, 00:27:53.139 "claimed": false, 00:27:53.139 "zoned": false, 00:27:53.139 "supported_io_types": { 00:27:53.139 "read": true, 00:27:53.139 "write": true, 00:27:53.139 "unmap": false, 00:27:53.139 "flush": true, 00:27:53.139 "reset": true, 00:27:53.139 "nvme_admin": true, 00:27:53.139 "nvme_io": true, 00:27:53.139 "nvme_io_md": false, 00:27:53.139 "write_zeroes": true, 00:27:53.139 "zcopy": false, 00:27:53.139 "get_zone_info": false, 00:27:53.139 "zone_management": false, 00:27:53.140 "zone_append": false, 00:27:53.140 "compare": true, 00:27:53.140 "compare_and_write": true, 00:27:53.140 "abort": true, 00:27:53.140 "seek_hole": false, 00:27:53.140 "seek_data": false, 00:27:53.140 "copy": true, 00:27:53.140 "nvme_iov_md": false 00:27:53.140 }, 00:27:53.140 "memory_domains": [ 00:27:53.140 { 00:27:53.140 "dma_device_id": "system", 00:27:53.140 "dma_device_type": 1 00:27:53.140 } 00:27:53.140 ], 00:27:53.140 "driver_specific": { 00:27:53.140 "nvme": [ 00:27:53.140 { 00:27:53.140 "trid": { 00:27:53.140 "trtype": "TCP", 00:27:53.140 "adrfam": "IPv4", 00:27:53.140 "traddr": "10.0.0.2", 00:27:53.140 "trsvcid": "4421", 00:27:53.140 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:53.140 }, 00:27:53.140 "ctrlr_data": { 00:27:53.140 "cntlid": 3, 00:27:53.140 "vendor_id": "0x8086", 00:27:53.140 "model_number": "SPDK bdev Controller", 00:27:53.140 "serial_number": "00000000000000000000", 00:27:53.140 "firmware_revision": "24.09", 00:27:53.140 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.140 "oacs": { 00:27:53.140 "security": 0, 00:27:53.140 "format": 0, 00:27:53.140 "firmware": 0, 00:27:53.140 "ns_manage": 0 00:27:53.140 }, 00:27:53.140 "multi_ctrlr": true, 00:27:53.140 "ana_reporting": false 00:27:53.140 }, 00:27:53.140 "vs": { 00:27:53.140 "nvme_version": "1.3" 00:27:53.140 }, 00:27:53.140 "ns_data": { 00:27:53.140 "id": 1, 00:27:53.140 "can_share": true 00:27:53.140 } 00:27:53.140 } 00:27:53.140 ], 00:27:53.140 "mp_policy": "active_passive" 00:27:53.140 } 00:27:53.140 } 00:27:53.140 ] 00:27:53.140 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.140 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.140 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.140 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:53.140 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.140 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.Arkqj7Da7Y 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:53.398 02:06:08 
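The last leg of the test exercises the TLS support, which the target itself flags as experimental and whose PSK-path option is logged as deprecated. A PSK interchange key is written to a temp file and used on both the listener and the initiator side; the xtrace above corresponds roughly to the sketch below (the redirect into the key file is not visible in xtrace, and the key is the test's sample key, not a secret):

rpc="$rootdir"/scripts/rpc.py    # same single nvmf_tgt instance as before

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# Restrict the subsystem to named hosts, add a secure-channel (TLS) listener on port 4421,
# and authorize host1 with the PSK.
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Reconnect through the TLS listener with the same PSK, verify the bdev, then clean up.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
$rpc bdev_get_bdevs -b nvme0n1        # the dump above shows trsvcid 4421 and cntlid 3
$rpc bdev_nvme_detach_controller nvme0
rm -f "$key_path"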
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.398 rmmod nvme_tcp 00:27:53.398 rmmod nvme_fabrics 00:27:53.398 rmmod nvme_keyring 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1518508 ']' 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1518508 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1518508 ']' 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1518508 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1518508 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1518508' 00:27:53.398 killing process with pid 1518508 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1518508 00:27:53.398 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1518508 00:27:53.398 [2024-07-24 02:06:08.131420] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:53.398 [2024-07-24 02:06:08.131455] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.656 02:06:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.656 02:06:08 
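nvmftestfini then tears the fixture down. In shell terms the cleanup seen here is approximately the following; the exact body of _remove_spdk_ns is not shown in the trace, so the netns deletion line is an assumption about what it does:

modprobe -v -r nvme-tcp          # also unloads nvme_fabrics / nvme_keyring, as the rmmod lines above show
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # nvmfpid=1518508, the reactor_0 target process
ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1         # flush the initiator-side address (next line of the trace)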
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:55.556 00:27:55.556 real 0m5.452s 00:27:55.556 user 0m2.001s 00:27:55.556 sys 0m1.825s 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:55.556 ************************************ 00:27:55.556 END TEST nvmf_async_init 00:27:55.556 ************************************ 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.556 ************************************ 00:27:55.556 START TEST dma 00:27:55.556 ************************************ 00:27:55.556 02:06:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:55.815 * Looking for test storage... 00:27:55.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.815 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.816 
02:06:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.816 02:06:10 
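The dma test started above only sources its environment and then returns: host/dma.sh exercises DMA/memory-domain paths over RDMA only, so for --transport=tcp it exits successfully right away, which is why the trace just below shows the tcp != rdma comparison followed by exit 0. A minimal sketch of that guard (the TEST_TRANSPORT variable name is an assumption; the trace only shows the expanded comparison):

# host/dma.sh guard, as traced at dma.sh@12-13: skip the whole test unless the transport is RDMA
if [ "$TEST_TRANSPORT" != 'rdma' ]; then
    exit 0
fi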
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:55.816 00:27:55.816 real 0m0.071s 00:27:55.816 user 0m0.025s 00:27:55.816 sys 0m0.052s 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:55.816 ************************************ 00:27:55.816 END TEST dma 00:27:55.816 ************************************ 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.816 ************************************ 00:27:55.816 START TEST nvmf_identify 00:27:55.816 ************************************ 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:55.816 * Looking for test storage... 00:27:55.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.816 02:06:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.347 02:06:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.347 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.348 02:06:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:27:58.348 00:27:58.348 --- 10.0.0.2 ping statistics --- 00:27:58.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.348 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:27:58.348 00:27:58.348 --- 10.0.0.1 ping statistics --- 00:27:58.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.348 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1521132 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1521132 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1521132 ']' 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.348 02:06:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.348 [2024-07-24 02:06:12.854555] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:27:58.349 [2024-07-24 02:06:12.854640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.349 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.349 [2024-07-24 02:06:12.924011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:58.349 [2024-07-24 02:06:13.012857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.349 [2024-07-24 02:06:13.012910] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.349 [2024-07-24 02:06:13.012923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.349 [2024-07-24 02:06:13.012933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.349 [2024-07-24 02:06:13.012943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.349 [2024-07-24 02:06:13.013028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.349 [2024-07-24 02:06:13.013052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:58.349 [2024-07-24 02:06:13.013112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:58.349 [2024-07-24 02:06:13.013114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 [2024-07-24 02:06:13.145762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 Malloc0 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 [2024-07-24 02:06:13.223491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.349 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.619 [ 00:27:58.619 { 00:27:58.619 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:58.619 "subtype": "Discovery", 00:27:58.619 "listen_addresses": [ 00:27:58.619 { 00:27:58.619 "trtype": "TCP", 00:27:58.619 "adrfam": "IPv4", 00:27:58.619 "traddr": "10.0.0.2", 00:27:58.619 "trsvcid": "4420" 00:27:58.620 } 00:27:58.620 ], 00:27:58.620 "allow_any_host": true, 00:27:58.620 "hosts": [] 00:27:58.620 }, 00:27:58.620 { 00:27:58.620 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:58.620 "subtype": "NVMe", 00:27:58.620 "listen_addresses": [ 00:27:58.620 { 00:27:58.620 "trtype": "TCP", 00:27:58.620 "adrfam": "IPv4", 00:27:58.620 "traddr": "10.0.0.2", 00:27:58.620 "trsvcid": "4420" 00:27:58.620 } 00:27:58.620 ], 00:27:58.620 "allow_any_host": true, 00:27:58.620 "hosts": [], 00:27:58.620 "serial_number": "SPDK00000000000001", 00:27:58.620 "model_number": "SPDK bdev Controller", 00:27:58.620 "max_namespaces": 32, 00:27:58.620 "min_cntlid": 1, 00:27:58.620 "max_cntlid": 65519, 00:27:58.620 "namespaces": [ 00:27:58.620 { 00:27:58.620 "nsid": 1, 00:27:58.620 "bdev_name": "Malloc0", 00:27:58.620 "name": "Malloc0", 00:27:58.620 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:58.620 "eui64": "ABCDEF0123456789", 00:27:58.620 "uuid": "f0ae6a18-c3c8-45a5-98b0-14249211538b" 00:27:58.620 } 00:27:58.620 ] 00:27:58.620 } 00:27:58.620 ] 00:27:58.620 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.620 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:58.620 [2024-07-24 02:06:13.265209] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:27:58.620 [2024-07-24 02:06:13.265252] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521172 ] 00:27:58.620 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.620 [2024-07-24 02:06:13.300805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:58.620 [2024-07-24 02:06:13.300866] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:58.620 [2024-07-24 02:06:13.300876] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:58.620 [2024-07-24 02:06:13.300890] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:58.620 [2024-07-24 02:06:13.300904] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:58.620 [2024-07-24 02:06:13.301143] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:58.620 [2024-07-24 02:06:13.301188] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x53cae0 0 00:27:58.620 [2024-07-24 02:06:13.307331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:58.620 [2024-07-24 02:06:13.307356] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:58.620 [2024-07-24 02:06:13.307366] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:58.620 [2024-07-24 02:06:13.307372] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:58.620 [2024-07-24 02:06:13.307422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.307439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.307448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.620 [2024-07-24 02:06:13.307465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:58.620 [2024-07-24 02:06:13.307492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.620 [2024-07-24 02:06:13.311332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.620 [2024-07-24 02:06:13.311350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.620 [2024-07-24 02:06:13.311358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.620 [2024-07-24 02:06:13.311380] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:58.620 [2024-07-24 02:06:13.311391] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:58.620 [2024-07-24 02:06:13.311401] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting 
state to read vs wait for vs (no timeout) 00:27:58.620 [2024-07-24 02:06:13.311422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.620 [2024-07-24 02:06:13.311449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.620 [2024-07-24 02:06:13.311472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.620 [2024-07-24 02:06:13.311617] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.620 [2024-07-24 02:06:13.311641] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.620 [2024-07-24 02:06:13.311648] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.620 [2024-07-24 02:06:13.311668] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:58.620 [2024-07-24 02:06:13.311683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:58.620 [2024-07-24 02:06:13.311695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.620 [2024-07-24 02:06:13.311720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.620 [2024-07-24 02:06:13.311741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.620 [2024-07-24 02:06:13.311836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.620 [2024-07-24 02:06:13.311857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.620 [2024-07-24 02:06:13.311864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311871] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.620 [2024-07-24 02:06:13.311880] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:58.620 [2024-07-24 02:06:13.311894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:58.620 [2024-07-24 02:06:13.311906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.311926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.620 [2024-07-24 02:06:13.311936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.620 [2024-07-24 02:06:13.311958] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.620 [2024-07-24 02:06:13.312056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.620 [2024-07-24 02:06:13.312068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.620 [2024-07-24 02:06:13.312075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.312082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.620 [2024-07-24 02:06:13.312091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:58.620 [2024-07-24 02:06:13.312107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.312116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.620 [2024-07-24 02:06:13.312122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.620 [2024-07-24 02:06:13.312133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.620 [2024-07-24 02:06:13.312153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.620 [2024-07-24 02:06:13.312247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.620 [2024-07-24 02:06:13.312263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.620 [2024-07-24 02:06:13.312270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.621 [2024-07-24 02:06:13.312285] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:58.621 [2024-07-24 02:06:13.312294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:58.621 [2024-07-24 02:06:13.312307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:58.621 [2024-07-24 02:06:13.312423] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:58.621 [2024-07-24 02:06:13.312433] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:58.621 [2024-07-24 02:06:13.312446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.312470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.621 [2024-07-24 02:06:13.312507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.621 [2024-07-24 02:06:13.312691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.621 
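Once the target is up, the rpc_cmd calls earlier in the trace configure it over its JSON-RPC UNIX socket. Issued by hand, the same configuration would look roughly like the sketch below, using scripts/rpc.py from the checked-out SPDK tree (assuming rpc_cmd forwards its arguments to rpc.py and the default /var/tmp/spdk.sock socket is in use):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Same flags as the rpc_cmd calls above: TCP transport, a 64 MB malloc-backed bdev with
    # 512-byte blocks, one NVM subsystem exposing that bdev as namespace 1, plus data and
    # discovery listeners on 10.0.0.2:4420.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems        # prints the JSON dump shown earlier in the trace

The spdk_nvme_identify run whose DEBUG output continues below connects to the discovery subsystem first (subnqn nqn.2014-08.org.nvmexpress.discovery), walks the usual fabric init sequence visible in the trace (FABRIC CONNECT, VS/CAP property reads, the CC.EN toggle, keep-alive setup), and then fetches the discovery log page.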
[2024-07-24 02:06:13.312704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.621 [2024-07-24 02:06:13.312711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.621 [2024-07-24 02:06:13.312726] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:58.621 [2024-07-24 02:06:13.312747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.312774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.621 [2024-07-24 02:06:13.312794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.621 [2024-07-24 02:06:13.312903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.621 [2024-07-24 02:06:13.312915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.621 [2024-07-24 02:06:13.312921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.621 [2024-07-24 02:06:13.312936] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:58.621 [2024-07-24 02:06:13.312944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:58.621 [2024-07-24 02:06:13.312957] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:58.621 [2024-07-24 02:06:13.312971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:58.621 [2024-07-24 02:06:13.312985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.312992] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.621 [2024-07-24 02:06:13.313024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.621 [2024-07-24 02:06:13.313171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.621 [2024-07-24 02:06:13.313187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.621 [2024-07-24 02:06:13.313194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313201] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x53cae0): datao=0, datal=4096, cccid=0 00:27:58.621 [2024-07-24 02:06:13.313209] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x593240) on tqpair(0x53cae0): expected_datao=0, payload_size=4096 00:27:58.621 [2024-07-24 02:06:13.313216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313227] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313235] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.621 [2024-07-24 02:06:13.313257] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.621 [2024-07-24 02:06:13.313264] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.621 [2024-07-24 02:06:13.313281] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:58.621 [2024-07-24 02:06:13.313290] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:58.621 [2024-07-24 02:06:13.313297] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:58.621 [2024-07-24 02:06:13.313323] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:58.621 [2024-07-24 02:06:13.313336] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:58.621 [2024-07-24 02:06:13.313345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:58.621 [2024-07-24 02:06:13.313359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:58.621 [2024-07-24 02:06:13.313375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.621 [2024-07-24 02:06:13.313422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.621 [2024-07-24 02:06:13.313537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.621 [2024-07-24 02:06:13.313553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.621 [2024-07-24 02:06:13.313560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313566] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.621 [2024-07-24 02:06:13.313578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313601] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.621 [2024-07-24 02:06:13.313612] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.621 [2024-07-24 02:06:13.313642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.621 [2024-07-24 02:06:13.313673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.621 [2024-07-24 02:06:13.313703] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:58.621 [2024-07-24 02:06:13.313737] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:58.621 [2024-07-24 02:06:13.313750] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.313757] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x53cae0) 00:27:58.621 [2024-07-24 02:06:13.313767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.621 [2024-07-24 02:06:13.313806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593240, cid 0, qid 0 00:27:58.621 [2024-07-24 02:06:13.313818] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5933c0, cid 1, qid 0 00:27:58.621 [2024-07-24 02:06:13.313825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593540, cid 2, qid 0 00:27:58.621 [2024-07-24 02:06:13.313832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.621 [2024-07-24 02:06:13.313853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593840, cid 4, qid 0 00:27:58.621 [2024-07-24 02:06:13.314031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.621 [2024-07-24 02:06:13.314046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.621 [2024-07-24 02:06:13.314053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.621 [2024-07-24 02:06:13.314060] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593840) on tqpair=0x53cae0 00:27:58.621 [2024-07-24 02:06:13.314069] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:58.621 [2024-07-24 02:06:13.314078] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:58.621 [2024-07-24 02:06:13.314095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.314104] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x53cae0) 00:27:58.622 [2024-07-24 02:06:13.314115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.622 [2024-07-24 02:06:13.314135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593840, cid 4, qid 0 00:27:58.622 [2024-07-24 02:06:13.314286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.622 [2024-07-24 02:06:13.314298] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.622 [2024-07-24 02:06:13.314304] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.314311] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x53cae0): datao=0, datal=4096, cccid=4 00:27:58.622 [2024-07-24 02:06:13.314327] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x593840) on tqpair(0x53cae0): expected_datao=0, payload_size=4096 00:27:58.622 [2024-07-24 02:06:13.314335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.314358] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.314366] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354469] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.622 [2024-07-24 02:06:13.354489] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.622 [2024-07-24 02:06:13.354498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354505] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593840) on tqpair=0x53cae0 00:27:58.622 [2024-07-24 02:06:13.354525] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:58.622 [2024-07-24 02:06:13.354561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x53cae0) 00:27:58.622 [2024-07-24 02:06:13.354583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.622 [2024-07-24 02:06:13.354594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x53cae0) 00:27:58.622 [2024-07-24 02:06:13.354626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 00:27:58.622 [2024-07-24 02:06:13.354655] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593840, cid 4, qid 0 00:27:58.622 [2024-07-24 02:06:13.354667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5939c0, cid 5, qid 0 00:27:58.622 [2024-07-24 02:06:13.354817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.622 [2024-07-24 02:06:13.354833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.622 [2024-07-24 02:06:13.354840] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354846] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x53cae0): datao=0, datal=1024, cccid=4 00:27:58.622 [2024-07-24 02:06:13.354854] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x593840) on tqpair(0x53cae0): expected_datao=0, payload_size=1024 00:27:58.622 [2024-07-24 02:06:13.354861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354871] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354878] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.622 [2024-07-24 02:06:13.354896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.622 [2024-07-24 02:06:13.354902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.354909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5939c0) on tqpair=0x53cae0 00:27:58.622 [2024-07-24 02:06:13.399330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.622 [2024-07-24 02:06:13.399348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.622 [2024-07-24 02:06:13.399355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593840) on tqpair=0x53cae0 00:27:58.622 [2024-07-24 02:06:13.399378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x53cae0) 00:27:58.622 [2024-07-24 02:06:13.399397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.622 [2024-07-24 02:06:13.399427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593840, cid 4, qid 0 00:27:58.622 [2024-07-24 02:06:13.399593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.622 [2024-07-24 02:06:13.399606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.622 [2024-07-24 02:06:13.399612] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399619] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x53cae0): datao=0, datal=3072, cccid=4 00:27:58.622 [2024-07-24 02:06:13.399626] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x593840) on tqpair(0x53cae0): expected_datao=0, payload_size=3072 00:27:58.622 [2024-07-24 02:06:13.399634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399644] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399651] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.622 [2024-07-24 02:06:13.399672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.622 [2024-07-24 02:06:13.399679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593840) on tqpair=0x53cae0 00:27:58.622 [2024-07-24 02:06:13.399700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399708] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x53cae0) 00:27:58.622 [2024-07-24 02:06:13.399724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.622 [2024-07-24 02:06:13.399753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x593840, cid 4, qid 0 00:27:58.622 [2024-07-24 02:06:13.399866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.622 [2024-07-24 02:06:13.399881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.622 [2024-07-24 02:06:13.399888] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399894] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x53cae0): datao=0, datal=8, cccid=4 00:27:58.622 [2024-07-24 02:06:13.399902] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x593840) on tqpair(0x53cae0): expected_datao=0, payload_size=8 00:27:58.622 [2024-07-24 02:06:13.399909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399918] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.399926] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.440443] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.622 [2024-07-24 02:06:13.440462] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.622 [2024-07-24 02:06:13.440469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.622 [2024-07-24 02:06:13.440476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593840) on tqpair=0x53cae0 00:27:58.622 ===================================================== 00:27:58.622 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:58.622 ===================================================== 00:27:58.622 Controller Capabilities/Features 00:27:58.622 ================================ 00:27:58.622 Vendor ID: 0000 00:27:58.622 Subsystem Vendor ID: 0000 00:27:58.622 Serial Number: .................... 00:27:58.622 Model Number: ........................................ 
00:27:58.622 Firmware Version: 24.09 00:27:58.622 Recommended Arb Burst: 0 00:27:58.622 IEEE OUI Identifier: 00 00 00 00:27:58.622 Multi-path I/O 00:27:58.622 May have multiple subsystem ports: No 00:27:58.622 May have multiple controllers: No 00:27:58.622 Associated with SR-IOV VF: No 00:27:58.622 Max Data Transfer Size: 131072 00:27:58.622 Max Number of Namespaces: 0 00:27:58.622 Max Number of I/O Queues: 1024 00:27:58.622 NVMe Specification Version (VS): 1.3 00:27:58.622 NVMe Specification Version (Identify): 1.3 00:27:58.622 Maximum Queue Entries: 128 00:27:58.622 Contiguous Queues Required: Yes 00:27:58.622 Arbitration Mechanisms Supported 00:27:58.622 Weighted Round Robin: Not Supported 00:27:58.622 Vendor Specific: Not Supported 00:27:58.622 Reset Timeout: 15000 ms 00:27:58.622 Doorbell Stride: 4 bytes 00:27:58.622 NVM Subsystem Reset: Not Supported 00:27:58.622 Command Sets Supported 00:27:58.622 NVM Command Set: Supported 00:27:58.622 Boot Partition: Not Supported 00:27:58.622 Memory Page Size Minimum: 4096 bytes 00:27:58.622 Memory Page Size Maximum: 4096 bytes 00:27:58.622 Persistent Memory Region: Not Supported 00:27:58.622 Optional Asynchronous Events Supported 00:27:58.622 Namespace Attribute Notices: Not Supported 00:27:58.622 Firmware Activation Notices: Not Supported 00:27:58.622 ANA Change Notices: Not Supported 00:27:58.622 PLE Aggregate Log Change Notices: Not Supported 00:27:58.622 LBA Status Info Alert Notices: Not Supported 00:27:58.622 EGE Aggregate Log Change Notices: Not Supported 00:27:58.622 Normal NVM Subsystem Shutdown event: Not Supported 00:27:58.622 Zone Descriptor Change Notices: Not Supported 00:27:58.622 Discovery Log Change Notices: Supported 00:27:58.622 Controller Attributes 00:27:58.622 128-bit Host Identifier: Not Supported 00:27:58.622 Non-Operational Permissive Mode: Not Supported 00:27:58.622 NVM Sets: Not Supported 00:27:58.622 Read Recovery Levels: Not Supported 00:27:58.622 Endurance Groups: Not Supported 00:27:58.622 Predictable Latency Mode: Not Supported 00:27:58.622 Traffic Based Keep ALive: Not Supported 00:27:58.622 Namespace Granularity: Not Supported 00:27:58.622 SQ Associations: Not Supported 00:27:58.622 UUID List: Not Supported 00:27:58.622 Multi-Domain Subsystem: Not Supported 00:27:58.622 Fixed Capacity Management: Not Supported 00:27:58.622 Variable Capacity Management: Not Supported 00:27:58.622 Delete Endurance Group: Not Supported 00:27:58.622 Delete NVM Set: Not Supported 00:27:58.622 Extended LBA Formats Supported: Not Supported 00:27:58.622 Flexible Data Placement Supported: Not Supported 00:27:58.622 00:27:58.622 Controller Memory Buffer Support 00:27:58.622 ================================ 00:27:58.622 Supported: No 00:27:58.622 00:27:58.622 Persistent Memory Region Support 00:27:58.622 ================================ 00:27:58.622 Supported: No 00:27:58.622 00:27:58.622 Admin Command Set Attributes 00:27:58.622 ============================ 00:27:58.622 Security Send/Receive: Not Supported 00:27:58.622 Format NVM: Not Supported 00:27:58.622 Firmware Activate/Download: Not Supported 00:27:58.622 Namespace Management: Not Supported 00:27:58.622 Device Self-Test: Not Supported 00:27:58.622 Directives: Not Supported 00:27:58.622 NVMe-MI: Not Supported 00:27:58.622 Virtualization Management: Not Supported 00:27:58.622 Doorbell Buffer Config: Not Supported 00:27:58.622 Get LBA Status Capability: Not Supported 00:27:58.622 Command & Feature Lockdown Capability: Not Supported 00:27:58.622 Abort Command Limit: 1 00:27:58.622 Async 
Event Request Limit: 4 00:27:58.622 Number of Firmware Slots: N/A 00:27:58.622 Firmware Slot 1 Read-Only: N/A 00:27:58.622 Firmware Activation Without Reset: N/A 00:27:58.622 Multiple Update Detection Support: N/A 00:27:58.622 Firmware Update Granularity: No Information Provided 00:27:58.622 Per-Namespace SMART Log: No 00:27:58.622 Asymmetric Namespace Access Log Page: Not Supported 00:27:58.622 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:58.622 Command Effects Log Page: Not Supported 00:27:58.622 Get Log Page Extended Data: Supported 00:27:58.622 Telemetry Log Pages: Not Supported 00:27:58.622 Persistent Event Log Pages: Not Supported 00:27:58.622 Supported Log Pages Log Page: May Support 00:27:58.622 Commands Supported & Effects Log Page: Not Supported 00:27:58.622 Feature Identifiers & Effects Log Page:May Support 00:27:58.622 NVMe-MI Commands & Effects Log Page: May Support 00:27:58.622 Data Area 4 for Telemetry Log: Not Supported 00:27:58.622 Error Log Page Entries Supported: 128 00:27:58.622 Keep Alive: Not Supported 00:27:58.622 00:27:58.622 NVM Command Set Attributes 00:27:58.622 ========================== 00:27:58.622 Submission Queue Entry Size 00:27:58.622 Max: 1 00:27:58.622 Min: 1 00:27:58.622 Completion Queue Entry Size 00:27:58.622 Max: 1 00:27:58.622 Min: 1 00:27:58.622 Number of Namespaces: 0 00:27:58.622 Compare Command: Not Supported 00:27:58.622 Write Uncorrectable Command: Not Supported 00:27:58.622 Dataset Management Command: Not Supported 00:27:58.622 Write Zeroes Command: Not Supported 00:27:58.622 Set Features Save Field: Not Supported 00:27:58.622 Reservations: Not Supported 00:27:58.622 Timestamp: Not Supported 00:27:58.622 Copy: Not Supported 00:27:58.622 Volatile Write Cache: Not Present 00:27:58.622 Atomic Write Unit (Normal): 1 00:27:58.622 Atomic Write Unit (PFail): 1 00:27:58.622 Atomic Compare & Write Unit: 1 00:27:58.622 Fused Compare & Write: Supported 00:27:58.622 Scatter-Gather List 00:27:58.622 SGL Command Set: Supported 00:27:58.622 SGL Keyed: Supported 00:27:58.622 SGL Bit Bucket Descriptor: Not Supported 00:27:58.622 SGL Metadata Pointer: Not Supported 00:27:58.622 Oversized SGL: Not Supported 00:27:58.622 SGL Metadata Address: Not Supported 00:27:58.622 SGL Offset: Supported 00:27:58.622 Transport SGL Data Block: Not Supported 00:27:58.623 Replay Protected Memory Block: Not Supported 00:27:58.623 00:27:58.623 Firmware Slot Information 00:27:58.623 ========================= 00:27:58.623 Active slot: 0 00:27:58.623 00:27:58.623 00:27:58.623 Error Log 00:27:58.623 ========= 00:27:58.623 00:27:58.623 Active Namespaces 00:27:58.623 ================= 00:27:58.623 Discovery Log Page 00:27:58.623 ================== 00:27:58.623 Generation Counter: 2 00:27:58.623 Number of Records: 2 00:27:58.623 Record Format: 0 00:27:58.623 00:27:58.623 Discovery Log Entry 0 00:27:58.623 ---------------------- 00:27:58.623 Transport Type: 3 (TCP) 00:27:58.623 Address Family: 1 (IPv4) 00:27:58.623 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:58.623 Entry Flags: 00:27:58.623 Duplicate Returned Information: 1 00:27:58.623 Explicit Persistent Connection Support for Discovery: 1 00:27:58.623 Transport Requirements: 00:27:58.623 Secure Channel: Not Required 00:27:58.623 Port ID: 0 (0x0000) 00:27:58.623 Controller ID: 65535 (0xffff) 00:27:58.623 Admin Max SQ Size: 128 00:27:58.623 Transport Service Identifier: 4420 00:27:58.623 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:58.623 Transport Address: 10.0.0.2 00:27:58.623 
Discovery Log Entry 1 00:27:58.623 ---------------------- 00:27:58.623 Transport Type: 3 (TCP) 00:27:58.623 Address Family: 1 (IPv4) 00:27:58.623 Subsystem Type: 2 (NVM Subsystem) 00:27:58.623 Entry Flags: 00:27:58.623 Duplicate Returned Information: 0 00:27:58.623 Explicit Persistent Connection Support for Discovery: 0 00:27:58.623 Transport Requirements: 00:27:58.623 Secure Channel: Not Required 00:27:58.623 Port ID: 0 (0x0000) 00:27:58.623 Controller ID: 65535 (0xffff) 00:27:58.623 Admin Max SQ Size: 128 00:27:58.623 Transport Service Identifier: 4420 00:27:58.623 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:58.623 Transport Address: 10.0.0.2 [2024-07-24 02:06:13.440584] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:58.623 [2024-07-24 02:06:13.440605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593240) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.440617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.623 [2024-07-24 02:06:13.440626] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5933c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.440634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.623 [2024-07-24 02:06:13.440642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x593540) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.440649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.623 [2024-07-24 02:06:13.440658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.440665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.623 [2024-07-24 02:06:13.440682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.440691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.440697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.440708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.440747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.440909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.440926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.440933] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.440940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.440951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.440959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.440966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.440980] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.441008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.441120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.441135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.441142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.441157] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:58.623 [2024-07-24 02:06:13.441165] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:58.623 [2024-07-24 02:06:13.441181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441197] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.441207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.441228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.441333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.441347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.441354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.441377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.441403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.441424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.441520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.441536] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.441543] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.441566] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441582] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.441593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.441613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.441712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.441724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.441731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.441754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.441785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.441805] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.441910] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.441922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.441929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.441951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.441967] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.441977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.441997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.442096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.442112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.442119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.442142] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.442168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.442189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.442282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.442297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.442304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442311] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.442335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442345] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.442361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.442383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.442482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.442495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.442502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.442524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.442553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.623 [2024-07-24 02:06:13.442575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.623 [2024-07-24 02:06:13.442670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.623 [2024-07-24 02:06:13.442685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.623 [2024-07-24 02:06:13.442692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.623 [2024-07-24 02:06:13.442715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.623 [2024-07-24 02:06:13.442731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.623 [2024-07-24 02:06:13.442741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.624 [2024-07-24 02:06:13.442762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.624 [2024-07-24 02:06:13.442856] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.624 [2024-07-24 02:06:13.442869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.624 [2024-07-24 02:06:13.442875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.442882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.624 [2024-07-24 02:06:13.442898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.442907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.442914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.624 [2024-07-24 02:06:13.442924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.624 [2024-07-24 02:06:13.442944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.624 [2024-07-24 02:06:13.443042] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.624 [2024-07-24 02:06:13.443054] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.624 [2024-07-24 02:06:13.443060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.443067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.624 [2024-07-24 02:06:13.443083] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.443092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.443098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.624 [2024-07-24 02:06:13.443108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.624 [2024-07-24 02:06:13.443128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.624 [2024-07-24 02:06:13.443225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.624 [2024-07-24 02:06:13.443240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.624 [2024-07-24 02:06:13.443247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.443254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.624 [2024-07-24 02:06:13.443271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.443280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.443286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.624 [2024-07-24 02:06:13.443297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.624 [2024-07-24 02:06:13.447323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.624 [2024-07-24 02:06:13.447344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.624 [2024-07-24 02:06:13.447355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.624 [2024-07-24 02:06:13.447361] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.447368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.624 [2024-07-24 02:06:13.447385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.447394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.447400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x53cae0) 00:27:58.624 [2024-07-24 02:06:13.447411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.624 [2024-07-24 02:06:13.447432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5936c0, cid 3, qid 0 00:27:58.624 [2024-07-24 02:06:13.447567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.624 [2024-07-24 02:06:13.447583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.624 [2024-07-24 02:06:13.447590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.624 [2024-07-24 02:06:13.447597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5936c0) on tqpair=0x53cae0 00:27:58.624 [2024-07-24 02:06:13.447610] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:27:58.624 00:27:58.624 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:58.624 [2024-07-24 02:06:13.478863] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
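For reference, the admin-queue bring-up that the DEBUG trace below records (FABRIC CONNECT, PROPERTY GET/SET for VS/CAP/CC/CSTS, then IDENTIFY) is the sequence spdk_nvme_connect() performs internally, which is what the spdk_nvme_identify utility invoked above relies on. A minimal sketch of reproducing that step through SPDK's public C API, assuming the same target parameters shown on the command line (the program name "identify_sketch" is illustrative, not part of the test):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts opts;
            struct spdk_nvme_transport_id trid = {};
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            spdk_env_opts_init(&opts);
            opts.name = "identify_sketch";   /* illustrative name, not from the test */
            if (spdk_env_init(&opts) < 0) {
                    return 1;
            }

            /* Same transport string passed to spdk_nvme_identify above. */
            if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                    return 1;
            }

            /* Drives the FABRIC CONNECT / PROPERTY GET / IDENTIFY exchanges
             * seen in the DEBUG trace that follows. */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    return 1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("Serial Number: %.20s\n", cdata->sn);
            printf("Model Number:  %.40s\n", cdata->mn);

            spdk_nvme_detach(ctrlr);
            return 0;
    }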
00:27:58.624 [2024-07-24 02:06:13.478904] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521286 ] 00:27:58.624 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.890 [2024-07-24 02:06:13.510172] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:58.890 [2024-07-24 02:06:13.510217] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:58.890 [2024-07-24 02:06:13.510226] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:58.890 [2024-07-24 02:06:13.510240] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:58.890 [2024-07-24 02:06:13.510251] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:58.890 [2024-07-24 02:06:13.510442] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:58.890 [2024-07-24 02:06:13.510480] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x116eae0 0 00:27:58.890 [2024-07-24 02:06:13.521328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:58.890 [2024-07-24 02:06:13.521352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:58.890 [2024-07-24 02:06:13.521362] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:58.890 [2024-07-24 02:06:13.521368] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:58.890 [2024-07-24 02:06:13.521405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.521422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.521430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.890 [2024-07-24 02:06:13.521444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:58.890 [2024-07-24 02:06:13.521470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.890 [2024-07-24 02:06:13.529350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.890 [2024-07-24 02:06:13.529368] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.890 [2024-07-24 02:06:13.529375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.890 [2024-07-24 02:06:13.529396] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:58.890 [2024-07-24 02:06:13.529407] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:58.890 [2024-07-24 02:06:13.529417] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:58.890 [2024-07-24 02:06:13.529435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:27:58.890 [2024-07-24 02:06:13.529450] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.890 [2024-07-24 02:06:13.529462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.890 [2024-07-24 02:06:13.529486] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.890 [2024-07-24 02:06:13.529646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.890 [2024-07-24 02:06:13.529662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.890 [2024-07-24 02:06:13.529669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.890 [2024-07-24 02:06:13.529687] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:58.890 [2024-07-24 02:06:13.529702] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:58.890 [2024-07-24 02:06:13.529715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.890 [2024-07-24 02:06:13.529740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.890 [2024-07-24 02:06:13.529762] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.890 [2024-07-24 02:06:13.529862] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.890 [2024-07-24 02:06:13.529877] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.890 [2024-07-24 02:06:13.529884] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529891] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.890 [2024-07-24 02:06:13.529899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:58.890 [2024-07-24 02:06:13.529913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:58.890 [2024-07-24 02:06:13.529926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.529944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.890 [2024-07-24 02:06:13.529955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.890 [2024-07-24 02:06:13.529977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.890 [2024-07-24 02:06:13.530077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.890 [2024-07-24 02:06:13.530090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:27:58.890 [2024-07-24 02:06:13.530097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.530103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.890 [2024-07-24 02:06:13.530112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:58.890 [2024-07-24 02:06:13.530128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.530138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.890 [2024-07-24 02:06:13.530144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.890 [2024-07-24 02:06:13.530155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.890 [2024-07-24 02:06:13.530176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.890 [2024-07-24 02:06:13.530277] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.890 [2024-07-24 02:06:13.530289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.891 [2024-07-24 02:06:13.530296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.891 [2024-07-24 02:06:13.530310] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:58.891 [2024-07-24 02:06:13.530327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:58.891 [2024-07-24 02:06:13.530341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:58.891 [2024-07-24 02:06:13.530464] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:58.891 [2024-07-24 02:06:13.530472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:58.891 [2024-07-24 02:06:13.530483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.530508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.891 [2024-07-24 02:06:13.530529] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.891 [2024-07-24 02:06:13.530676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.891 [2024-07-24 02:06:13.530692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.891 [2024-07-24 02:06:13.530699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on 
tqpair=0x116eae0 00:27:58.891 [2024-07-24 02:06:13.530714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:58.891 [2024-07-24 02:06:13.530731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530744] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.530762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.891 [2024-07-24 02:06:13.530784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.891 [2024-07-24 02:06:13.530883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.891 [2024-07-24 02:06:13.530895] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.891 [2024-07-24 02:06:13.530902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.891 [2024-07-24 02:06:13.530916] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:58.891 [2024-07-24 02:06:13.530925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:58.891 [2024-07-24 02:06:13.530938] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:58.891 [2024-07-24 02:06:13.530952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:58.891 [2024-07-24 02:06:13.530966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.530973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.530984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.891 [2024-07-24 02:06:13.531005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.891 [2024-07-24 02:06:13.531162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.891 [2024-07-24 02:06:13.531178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.891 [2024-07-24 02:06:13.531185] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.531191] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=4096, cccid=0 00:27:58.891 [2024-07-24 02:06:13.531199] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5240) on tqpair(0x116eae0): expected_datao=0, payload_size=4096 00:27:58.891 [2024-07-24 02:06:13.531206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.531216] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.531224] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.891 [2024-07-24 02:06:13.571486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.891 [2024-07-24 02:06:13.571494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.891 [2024-07-24 02:06:13.571512] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:58.891 [2024-07-24 02:06:13.571521] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:58.891 [2024-07-24 02:06:13.571528] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:58.891 [2024-07-24 02:06:13.571535] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:58.891 [2024-07-24 02:06:13.571542] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:58.891 [2024-07-24 02:06:13.571555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:58.891 [2024-07-24 02:06:13.571570] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:58.891 [2024-07-24 02:06:13.571587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571596] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571603] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.571614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.891 [2024-07-24 02:06:13.571638] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.891 [2024-07-24 02:06:13.571751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.891 [2024-07-24 02:06:13.571764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.891 [2024-07-24 02:06:13.571770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571777] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.891 [2024-07-24 02:06:13.571788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.571811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.891 [2024-07-24 02:06:13.571821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571835] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.571843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.891 [2024-07-24 02:06:13.571853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.571875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.891 [2024-07-24 02:06:13.571884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.571906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.891 [2024-07-24 02:06:13.571930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:58.891 [2024-07-24 02:06:13.571948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:58.891 [2024-07-24 02:06:13.571961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.891 [2024-07-24 02:06:13.571968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x116eae0) 00:27:58.891 [2024-07-24 02:06:13.571978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.891 [2024-07-24 02:06:13.571999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5240, cid 0, qid 0 00:27:58.891 [2024-07-24 02:06:13.572028] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c53c0, cid 1, qid 0 00:27:58.891 [2024-07-24 02:06:13.572037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5540, cid 2, qid 0 00:27:58.891 [2024-07-24 02:06:13.572045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.891 [2024-07-24 02:06:13.572053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 00:27:58.891 [2024-07-24 02:06:13.572200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.891 [2024-07-24 02:06:13.572215] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.891 [2024-07-24 02:06:13.572222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.892 [2024-07-24 02:06:13.572237] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:58.892 [2024-07-24 02:06:13.572246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.572264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.572276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.572287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x116eae0) 00:27:58.892 [2024-07-24 02:06:13.572311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:58.892 [2024-07-24 02:06:13.572356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 00:27:58.892 [2024-07-24 02:06:13.572519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.892 [2024-07-24 02:06:13.572535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.892 [2024-07-24 02:06:13.572542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.892 [2024-07-24 02:06:13.572618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.572637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.572652] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x116eae0) 00:27:58.892 [2024-07-24 02:06:13.572671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.892 [2024-07-24 02:06:13.572707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 00:27:58.892 [2024-07-24 02:06:13.572899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.892 [2024-07-24 02:06:13.572915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.892 [2024-07-24 02:06:13.572922] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572928] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=4096, cccid=4 00:27:58.892 [2024-07-24 02:06:13.572936] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5840) on tqpair(0x116eae0): expected_datao=0, payload_size=4096 00:27:58.892 [2024-07-24 02:06:13.572944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572965] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.572975] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.616329] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:27:58.892 [2024-07-24 02:06:13.616347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.892 [2024-07-24 02:06:13.616355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.616361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.892 [2024-07-24 02:06:13.616375] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:58.892 [2024-07-24 02:06:13.616396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.616415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.616429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.616436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x116eae0) 00:27:58.892 [2024-07-24 02:06:13.616447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.892 [2024-07-24 02:06:13.616469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 00:27:58.892 [2024-07-24 02:06:13.616630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.892 [2024-07-24 02:06:13.616646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.892 [2024-07-24 02:06:13.616653] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.616660] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=4096, cccid=4 00:27:58.892 [2024-07-24 02:06:13.616668] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5840) on tqpair(0x116eae0): expected_datao=0, payload_size=4096 00:27:58.892 [2024-07-24 02:06:13.616675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.616693] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.616703] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.657445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.892 [2024-07-24 02:06:13.657463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.892 [2024-07-24 02:06:13.657470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.657477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.892 [2024-07-24 02:06:13.657500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.657519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.657533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.657542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x116eae0) 00:27:58.892 [2024-07-24 02:06:13.657553] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.892 [2024-07-24 02:06:13.657576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 00:27:58.892 [2024-07-24 02:06:13.657686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.892 [2024-07-24 02:06:13.657702] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.892 [2024-07-24 02:06:13.657709] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.657715] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=4096, cccid=4 00:27:58.892 [2024-07-24 02:06:13.657727] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5840) on tqpair(0x116eae0): expected_datao=0, payload_size=4096 00:27:58.892 [2024-07-24 02:06:13.657736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.657753] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.657762] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.698433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.892 [2024-07-24 02:06:13.698451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.892 [2024-07-24 02:06:13.698458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.698465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.892 [2024-07-24 02:06:13.698479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698541] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698549] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:58.892 [2024-07-24 02:06:13.698557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:58.892 [2024-07-24 02:06:13.698566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:58.892 [2024-07-24 02:06:13.698585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.698594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x116eae0) 00:27:58.892 [2024-07-24 02:06:13.698605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.892 [2024-07-24 02:06:13.698616] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.698623] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.892 [2024-07-24 02:06:13.698630] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x116eae0) 00:27:58.892 [2024-07-24 02:06:13.698639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.892 [2024-07-24 02:06:13.698665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 00:27:58.892 [2024-07-24 02:06:13.698677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c59c0, cid 5, qid 0 00:27:58.893 [2024-07-24 02:06:13.698787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.698800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.698807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.698814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.698824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.698833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.698843] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.698851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c59c0) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.698867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.698877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.698887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.698909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c59c0, cid 5, qid 0 00:27:58.893 [2024-07-24 02:06:13.699013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.699029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.699036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c59c0) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.699059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699068] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.699079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.699100] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c59c0, cid 5, qid 0 00:27:58.893 [2024-07-24 02:06:13.699197] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.699212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.699219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c59c0) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.699242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.699262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.699283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c59c0, cid 5, qid 0 00:27:58.893 [2024-07-24 02:06:13.699397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.699413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.699421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699427] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c59c0) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.699451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.699473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.699485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.699502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.699513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.699530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.699545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x116eae0) 00:27:58.893 [2024-07-24 02:06:13.699564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.893 [2024-07-24 02:06:13.699586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c59c0, cid 5, qid 0 00:27:58.893 [2024-07-24 02:06:13.699613] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5840, cid 4, qid 0 
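The identify-active-ns and identify-ns exchanges traced above are what populate the controller's namespace list (the trace notes "Namespace 1 was added"). Once spdk_nvme_connect() has returned, those namespaces can be walked with the public API; a small sketch, assuming an already-connected ctrlr as in the earlier example (print_namespaces is a hypothetical helper name, not part of the test):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch: list the active namespaces of an already-connected controller. */
    static void
    print_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
            uint32_t nsid;

            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
                 nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

                    if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
                            continue;
                    }
                    printf("Namespace %u: %" PRIu64 " bytes\n",
                           nsid, spdk_nvme_ns_get_size(ns));
            }
    }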
00:27:58.893 [2024-07-24 02:06:13.699621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5b40, cid 6, qid 0 00:27:58.893 [2024-07-24 02:06:13.699628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5cc0, cid 7, qid 0 00:27:58.893 [2024-07-24 02:06:13.699907] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.893 [2024-07-24 02:06:13.699921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.893 [2024-07-24 02:06:13.699928] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699934] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=8192, cccid=5 00:27:58.893 [2024-07-24 02:06:13.699942] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c59c0) on tqpair(0x116eae0): expected_datao=0, payload_size=8192 00:27:58.893 [2024-07-24 02:06:13.699949] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.699990] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700001] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.893 [2024-07-24 02:06:13.700019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.893 [2024-07-24 02:06:13.700025] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700031] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=512, cccid=4 00:27:58.893 [2024-07-24 02:06:13.700039] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5840) on tqpair(0x116eae0): expected_datao=0, payload_size=512 00:27:58.893 [2024-07-24 02:06:13.700046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700055] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700061] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.893 [2024-07-24 02:06:13.700079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.893 [2024-07-24 02:06:13.700085] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700091] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=512, cccid=6 00:27:58.893 [2024-07-24 02:06:13.700099] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5b40) on tqpair(0x116eae0): expected_datao=0, payload_size=512 00:27:58.893 [2024-07-24 02:06:13.700106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700114] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700121] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:58.893 [2024-07-24 02:06:13.700138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:58.893 [2024-07-24 02:06:13.700145] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700151] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x116eae0): datao=0, datal=4096, cccid=7 00:27:58.893 [2024-07-24 02:06:13.700161] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11c5cc0) on tqpair(0x116eae0): expected_datao=0, payload_size=4096 00:27:58.893 [2024-07-24 02:06:13.700169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700178] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700186] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.700206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.700213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c59c0) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.700237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.700248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.700254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5840) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.700291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.893 [2024-07-24 02:06:13.700301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.893 [2024-07-24 02:06:13.700307] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.893 [2024-07-24 02:06:13.700313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5b40) on tqpair=0x116eae0 00:27:58.893 [2024-07-24 02:06:13.704337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.894 [2024-07-24 02:06:13.704348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.894 [2024-07-24 02:06:13.704355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.894 [2024-07-24 02:06:13.704362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5cc0) on tqpair=0x116eae0 00:27:58.894 ===================================================== 00:27:58.894 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.894 ===================================================== 00:27:58.894 Controller Capabilities/Features 00:27:58.894 ================================ 00:27:58.894 Vendor ID: 8086 00:27:58.894 Subsystem Vendor ID: 8086 00:27:58.894 Serial Number: SPDK00000000000001 00:27:58.894 Model Number: SPDK bdev Controller 00:27:58.894 Firmware Version: 24.09 00:27:58.894 Recommended Arb Burst: 6 00:27:58.894 IEEE OUI Identifier: e4 d2 5c 00:27:58.894 Multi-path I/O 00:27:58.894 May have multiple subsystem ports: Yes 00:27:58.894 May have multiple controllers: Yes 00:27:58.894 Associated with SR-IOV VF: No 00:27:58.894 Max Data Transfer Size: 131072 00:27:58.894 Max Number of Namespaces: 32 00:27:58.894 Max Number of I/O Queues: 127 00:27:58.894 NVMe Specification Version (VS): 1.3 00:27:58.894 NVMe Specification Version (Identify): 1.3 00:27:58.894 Maximum Queue Entries: 128 00:27:58.894 Contiguous Queues Required: Yes 00:27:58.894 
Arbitration Mechanisms Supported 00:27:58.894 Weighted Round Robin: Not Supported 00:27:58.894 Vendor Specific: Not Supported 00:27:58.894 Reset Timeout: 15000 ms 00:27:58.894 Doorbell Stride: 4 bytes 00:27:58.894 NVM Subsystem Reset: Not Supported 00:27:58.894 Command Sets Supported 00:27:58.894 NVM Command Set: Supported 00:27:58.894 Boot Partition: Not Supported 00:27:58.894 Memory Page Size Minimum: 4096 bytes 00:27:58.894 Memory Page Size Maximum: 4096 bytes 00:27:58.894 Persistent Memory Region: Not Supported 00:27:58.894 Optional Asynchronous Events Supported 00:27:58.894 Namespace Attribute Notices: Supported 00:27:58.894 Firmware Activation Notices: Not Supported 00:27:58.894 ANA Change Notices: Not Supported 00:27:58.894 PLE Aggregate Log Change Notices: Not Supported 00:27:58.894 LBA Status Info Alert Notices: Not Supported 00:27:58.894 EGE Aggregate Log Change Notices: Not Supported 00:27:58.894 Normal NVM Subsystem Shutdown event: Not Supported 00:27:58.894 Zone Descriptor Change Notices: Not Supported 00:27:58.894 Discovery Log Change Notices: Not Supported 00:27:58.894 Controller Attributes 00:27:58.894 128-bit Host Identifier: Supported 00:27:58.894 Non-Operational Permissive Mode: Not Supported 00:27:58.894 NVM Sets: Not Supported 00:27:58.894 Read Recovery Levels: Not Supported 00:27:58.894 Endurance Groups: Not Supported 00:27:58.894 Predictable Latency Mode: Not Supported 00:27:58.894 Traffic Based Keep ALive: Not Supported 00:27:58.894 Namespace Granularity: Not Supported 00:27:58.894 SQ Associations: Not Supported 00:27:58.894 UUID List: Not Supported 00:27:58.894 Multi-Domain Subsystem: Not Supported 00:27:58.894 Fixed Capacity Management: Not Supported 00:27:58.894 Variable Capacity Management: Not Supported 00:27:58.894 Delete Endurance Group: Not Supported 00:27:58.894 Delete NVM Set: Not Supported 00:27:58.894 Extended LBA Formats Supported: Not Supported 00:27:58.894 Flexible Data Placement Supported: Not Supported 00:27:58.894 00:27:58.894 Controller Memory Buffer Support 00:27:58.894 ================================ 00:27:58.894 Supported: No 00:27:58.894 00:27:58.894 Persistent Memory Region Support 00:27:58.894 ================================ 00:27:58.894 Supported: No 00:27:58.894 00:27:58.894 Admin Command Set Attributes 00:27:58.894 ============================ 00:27:58.894 Security Send/Receive: Not Supported 00:27:58.894 Format NVM: Not Supported 00:27:58.894 Firmware Activate/Download: Not Supported 00:27:58.894 Namespace Management: Not Supported 00:27:58.894 Device Self-Test: Not Supported 00:27:58.894 Directives: Not Supported 00:27:58.894 NVMe-MI: Not Supported 00:27:58.894 Virtualization Management: Not Supported 00:27:58.894 Doorbell Buffer Config: Not Supported 00:27:58.894 Get LBA Status Capability: Not Supported 00:27:58.894 Command & Feature Lockdown Capability: Not Supported 00:27:58.894 Abort Command Limit: 4 00:27:58.894 Async Event Request Limit: 4 00:27:58.894 Number of Firmware Slots: N/A 00:27:58.894 Firmware Slot 1 Read-Only: N/A 00:27:58.894 Firmware Activation Without Reset: N/A 00:27:58.894 Multiple Update Detection Support: N/A 00:27:58.894 Firmware Update Granularity: No Information Provided 00:27:58.894 Per-Namespace SMART Log: No 00:27:58.894 Asymmetric Namespace Access Log Page: Not Supported 00:27:58.894 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:27:58.894 Command Effects Log Page: Supported 00:27:58.894 Get Log Page Extended Data: Supported 00:27:58.894 Telemetry Log Pages: Not Supported 00:27:58.894 Persistent Event Log 
Pages: Not Supported 00:27:58.894 Supported Log Pages Log Page: May Support 00:27:58.894 Commands Supported & Effects Log Page: Not Supported 00:27:58.894 Feature Identifiers & Effects Log Page:May Support 00:27:58.894 NVMe-MI Commands & Effects Log Page: May Support 00:27:58.894 Data Area 4 for Telemetry Log: Not Supported 00:27:58.894 Error Log Page Entries Supported: 128 00:27:58.894 Keep Alive: Supported 00:27:58.894 Keep Alive Granularity: 10000 ms 00:27:58.894 00:27:58.894 NVM Command Set Attributes 00:27:58.894 ========================== 00:27:58.894 Submission Queue Entry Size 00:27:58.894 Max: 64 00:27:58.894 Min: 64 00:27:58.894 Completion Queue Entry Size 00:27:58.894 Max: 16 00:27:58.894 Min: 16 00:27:58.894 Number of Namespaces: 32 00:27:58.894 Compare Command: Supported 00:27:58.894 Write Uncorrectable Command: Not Supported 00:27:58.894 Dataset Management Command: Supported 00:27:58.894 Write Zeroes Command: Supported 00:27:58.894 Set Features Save Field: Not Supported 00:27:58.894 Reservations: Supported 00:27:58.894 Timestamp: Not Supported 00:27:58.894 Copy: Supported 00:27:58.894 Volatile Write Cache: Present 00:27:58.894 Atomic Write Unit (Normal): 1 00:27:58.894 Atomic Write Unit (PFail): 1 00:27:58.894 Atomic Compare & Write Unit: 1 00:27:58.894 Fused Compare & Write: Supported 00:27:58.894 Scatter-Gather List 00:27:58.894 SGL Command Set: Supported 00:27:58.894 SGL Keyed: Supported 00:27:58.894 SGL Bit Bucket Descriptor: Not Supported 00:27:58.894 SGL Metadata Pointer: Not Supported 00:27:58.894 Oversized SGL: Not Supported 00:27:58.894 SGL Metadata Address: Not Supported 00:27:58.894 SGL Offset: Supported 00:27:58.894 Transport SGL Data Block: Not Supported 00:27:58.894 Replay Protected Memory Block: Not Supported 00:27:58.894 00:27:58.894 Firmware Slot Information 00:27:58.894 ========================= 00:27:58.894 Active slot: 1 00:27:58.894 Slot 1 Firmware Revision: 24.09 00:27:58.894 00:27:58.894 00:27:58.894 Commands Supported and Effects 00:27:58.894 ============================== 00:27:58.894 Admin Commands 00:27:58.894 -------------- 00:27:58.894 Get Log Page (02h): Supported 00:27:58.894 Identify (06h): Supported 00:27:58.894 Abort (08h): Supported 00:27:58.894 Set Features (09h): Supported 00:27:58.894 Get Features (0Ah): Supported 00:27:58.894 Asynchronous Event Request (0Ch): Supported 00:27:58.894 Keep Alive (18h): Supported 00:27:58.894 I/O Commands 00:27:58.894 ------------ 00:27:58.894 Flush (00h): Supported LBA-Change 00:27:58.894 Write (01h): Supported LBA-Change 00:27:58.894 Read (02h): Supported 00:27:58.894 Compare (05h): Supported 00:27:58.894 Write Zeroes (08h): Supported LBA-Change 00:27:58.894 Dataset Management (09h): Supported LBA-Change 00:27:58.894 Copy (19h): Supported LBA-Change 00:27:58.894 00:27:58.894 Error Log 00:27:58.894 ========= 00:27:58.894 00:27:58.894 Arbitration 00:27:58.894 =========== 00:27:58.894 Arbitration Burst: 1 00:27:58.894 00:27:58.894 Power Management 00:27:58.894 ================ 00:27:58.894 Number of Power States: 1 00:27:58.894 Current Power State: Power State #0 00:27:58.894 Power State #0: 00:27:58.894 Max Power: 0.00 W 00:27:58.894 Non-Operational State: Operational 00:27:58.894 Entry Latency: Not Reported 00:27:58.894 Exit Latency: Not Reported 00:27:58.894 Relative Read Throughput: 0 00:27:58.894 Relative Read Latency: 0 00:27:58.894 Relative Write Throughput: 0 00:27:58.895 Relative Write Latency: 0 00:27:58.895 Idle Power: Not Reported 00:27:58.895 Active Power: Not Reported 00:27:58.895 
Non-Operational Permissive Mode: Not Supported 00:27:58.895 00:27:58.895 Health Information 00:27:58.895 ================== 00:27:58.895 Critical Warnings: 00:27:58.895 Available Spare Space: OK 00:27:58.895 Temperature: OK 00:27:58.895 Device Reliability: OK 00:27:58.895 Read Only: No 00:27:58.895 Volatile Memory Backup: OK 00:27:58.895 Current Temperature: 0 Kelvin (-273 Celsius) 00:27:58.895 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:27:58.895 Available Spare: 0% 00:27:58.895 Available Spare Threshold: 0% 00:27:58.895 Life Percentage Used:[2024-07-24 02:06:13.704475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.704487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x116eae0) 00:27:58.895 [2024-07-24 02:06:13.704498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.895 [2024-07-24 02:06:13.704521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c5cc0, cid 7, qid 0 00:27:58.895 [2024-07-24 02:06:13.704677] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.895 [2024-07-24 02:06:13.704691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.895 [2024-07-24 02:06:13.704698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.704705] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5cc0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.704747] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:58.895 [2024-07-24 02:06:13.704767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5240) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.704778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.895 [2024-07-24 02:06:13.704787] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c53c0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.704795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.895 [2024-07-24 02:06:13.704803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c5540) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.704810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.895 [2024-07-24 02:06:13.704819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.704844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.895 [2024-07-24 02:06:13.704858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.704866] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.704873] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.895 [2024-07-24 02:06:13.704883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.895 [2024-07-24 02:06:13.704919] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.895 [2024-07-24 02:06:13.705091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.895 [2024-07-24 02:06:13.705107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.895 [2024-07-24 02:06:13.705114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.705132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705140] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.895 [2024-07-24 02:06:13.705157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.895 [2024-07-24 02:06:13.705184] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.895 [2024-07-24 02:06:13.705294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.895 [2024-07-24 02:06:13.705310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.895 [2024-07-24 02:06:13.705323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.705338] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:58.895 [2024-07-24 02:06:13.705346] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:58.895 [2024-07-24 02:06:13.705362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.895 [2024-07-24 02:06:13.705389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.895 [2024-07-24 02:06:13.705410] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.895 [2024-07-24 02:06:13.705522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.895 [2024-07-24 02:06:13.705535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.895 [2024-07-24 02:06:13.705541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.705564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.895 [2024-07-24 02:06:13.705591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.895 [2024-07-24 02:06:13.705611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.895 [2024-07-24 02:06:13.705717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.895 [2024-07-24 02:06:13.705737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.895 [2024-07-24 02:06:13.705745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.895 [2024-07-24 02:06:13.705769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.895 [2024-07-24 02:06:13.705785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.895 [2024-07-24 02:06:13.705796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.895 [2024-07-24 02:06:13.705817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.895 [2024-07-24 02:06:13.705915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.895 [2024-07-24 02:06:13.705927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.895 [2024-07-24 02:06:13.705934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.705941] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.705957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.705966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.705973] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.705983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.706004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.706100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.706113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.706120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.706143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.706169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.706190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 
02:06:13.706287] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.706299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.706306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.706336] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.706363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.706385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.706480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.706495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.706506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.706531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.706558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.706579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.706670] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.706686] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.706692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706699] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.706716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.706742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.706763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.706864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.706876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 
[2024-07-24 02:06:13.706883] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706890] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.706906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.706922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.706932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.706953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.707047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.707062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.707069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707076] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.707092] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707102] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707109] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.707119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.707140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.707233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.707245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.707252] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707262] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.707280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.707306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.707334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.707434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.707446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.707453] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.707476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.707502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.707523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.707620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.707635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.707642] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.707665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707675] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707681] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.707692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.707713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.707805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.707820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.707827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.707850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.707866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.896 [2024-07-24 02:06:13.707876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.896 [2024-07-24 02:06:13.707897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.896 [2024-07-24 02:06:13.707986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.896 [2024-07-24 02:06:13.708001] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.896 [2024-07-24 02:06:13.708008] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.708015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.896 [2024-07-24 02:06:13.708035] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.896 [2024-07-24 02:06:13.708046] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.708052] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.897 [2024-07-24 02:06:13.708063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.897 [2024-07-24 02:06:13.708084] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.897 [2024-07-24 02:06:13.708178] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.897 [2024-07-24 02:06:13.708193] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.897 [2024-07-24 02:06:13.708200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.708207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.897 [2024-07-24 02:06:13.708224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.708233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.708240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.897 [2024-07-24 02:06:13.708250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.897 [2024-07-24 02:06:13.708271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.897 [2024-07-24 02:06:13.712331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.897 [2024-07-24 02:06:13.712348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.897 [2024-07-24 02:06:13.712355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.712361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.897 [2024-07-24 02:06:13.712378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.712388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.712394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x116eae0) 00:27:58.897 [2024-07-24 02:06:13.712405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.897 [2024-07-24 02:06:13.712426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11c56c0, cid 3, qid 0 00:27:58.897 [2024-07-24 02:06:13.712573] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:58.897 [2024-07-24 02:06:13.712586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:58.897 [2024-07-24 02:06:13.712593] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:58.897 [2024-07-24 02:06:13.712599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11c56c0) on tqpair=0x116eae0 00:27:58.897 [2024-07-24 02:06:13.712612] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:27:58.897 0% 00:27:58.897 Data Units Read: 0 00:27:58.897 Data Units Written: 0 00:27:58.897 Host Read Commands: 0 00:27:58.897 Host Write Commands: 0 00:27:58.897 Controller Busy Time: 0 minutes 
00:27:58.897 Power Cycles: 0 00:27:58.897 Power On Hours: 0 hours 00:27:58.897 Unsafe Shutdowns: 0 00:27:58.897 Unrecoverable Media Errors: 0 00:27:58.897 Lifetime Error Log Entries: 0 00:27:58.897 Warning Temperature Time: 0 minutes 00:27:58.897 Critical Temperature Time: 0 minutes 00:27:58.897 00:27:58.897 Number of Queues 00:27:58.897 ================ 00:27:58.897 Number of I/O Submission Queues: 127 00:27:58.897 Number of I/O Completion Queues: 127 00:27:58.897 00:27:58.897 Active Namespaces 00:27:58.897 ================= 00:27:58.897 Namespace ID:1 00:27:58.897 Error Recovery Timeout: Unlimited 00:27:58.897 Command Set Identifier: NVM (00h) 00:27:58.897 Deallocate: Supported 00:27:58.897 Deallocated/Unwritten Error: Not Supported 00:27:58.897 Deallocated Read Value: Unknown 00:27:58.897 Deallocate in Write Zeroes: Not Supported 00:27:58.897 Deallocated Guard Field: 0xFFFF 00:27:58.897 Flush: Supported 00:27:58.897 Reservation: Supported 00:27:58.897 Namespace Sharing Capabilities: Multiple Controllers 00:27:58.897 Size (in LBAs): 131072 (0GiB) 00:27:58.897 Capacity (in LBAs): 131072 (0GiB) 00:27:58.897 Utilization (in LBAs): 131072 (0GiB) 00:27:58.897 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:58.897 EUI64: ABCDEF0123456789 00:27:58.897 UUID: f0ae6a18-c3c8-45a5-98b0-14249211538b 00:27:58.897 Thin Provisioning: Not Supported 00:27:58.897 Per-NS Atomic Units: Yes 00:27:58.897 Atomic Boundary Size (Normal): 0 00:27:58.897 Atomic Boundary Size (PFail): 0 00:27:58.897 Atomic Boundary Offset: 0 00:27:58.897 Maximum Single Source Range Length: 65535 00:27:58.897 Maximum Copy Length: 65535 00:27:58.897 Maximum Source Range Count: 1 00:27:58.897 NGUID/EUI64 Never Reused: No 00:27:58.897 Namespace Write Protected: No 00:27:58.897 Number of LBA Formats: 1 00:27:58.897 Current LBA Format: LBA Format #00 00:27:58.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:58.897 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.897 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.897 rmmod nvme_tcp 00:27:58.897 rmmod nvme_fabrics 00:27:58.897 rmmod nvme_keyring 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:59.155 02:06:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1521132 ']' 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1521132 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1521132 ']' 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1521132 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1521132 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1521132' 00:27:59.155 killing process with pid 1521132 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1521132 00:27:59.155 02:06:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1521132 00:27:59.155 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:59.155 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:59.155 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:59.156 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.156 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.156 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.156 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.415 02:06:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.356 00:28:01.356 real 0m5.557s 00:28:01.356 user 0m4.787s 00:28:01.356 sys 0m1.918s 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:01.356 ************************************ 00:28:01.356 END TEST nvmf_identify 00:28:01.356 ************************************ 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.356 ************************************ 00:28:01.356 START TEST nvmf_perf 00:28:01.356 ************************************ 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:01.356 * Looking for test storage... 00:28:01.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:01.356 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:01.357 02:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.258 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.259 
02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:03.259 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:03.259 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:28:03.259 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:03.259 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:03.259 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:03.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:28:03.518 00:28:03.518 --- 10.0.0.2 ping statistics --- 00:28:03.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.518 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:03.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:28:03.518 00:28:03.518 --- 10.0.0.1 ping statistics --- 00:28:03.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.518 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1523216 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1523216 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1523216 ']' 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.518 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:03.518 [2024-07-24 02:06:18.282325] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:28:03.518 [2024-07-24 02:06:18.282397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.518 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.518 [2024-07-24 02:06:18.351542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:03.776 [2024-07-24 02:06:18.450546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.776 [2024-07-24 02:06:18.450622] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.776 [2024-07-24 02:06:18.450639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.776 [2024-07-24 02:06:18.450652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.776 [2024-07-24 02:06:18.450664] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.776 [2024-07-24 02:06:18.450731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.776 [2024-07-24 02:06:18.451123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.776 [2024-07-24 02:06:18.451183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.776 [2024-07-24 02:06:18.451186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.776 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.776 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:03.776 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:03.776 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:03.776 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:03.776 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.777 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:03.777 02:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:07.052 02:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:07.052 02:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:07.310 02:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:07.310 02:06:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:07.567 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:07.567 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:88:00.0 ']' 00:28:07.567 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:07.567 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:07.567 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:07.567 [2024-07-24 02:06:22.460864] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.825 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.082 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:08.082 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.082 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:08.082 02:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:08.340 02:06:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.598 [2024-07-24 02:06:23.444457] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.598 02:06:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:08.855 02:06:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:08.855 02:06:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:08.855 02:06:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:08.855 02:06:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:10.228 Initializing NVMe Controllers 00:28:10.228 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:10.228 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:10.228 Initialization complete. Launching workers. 
00:28:10.228 ======================================================== 00:28:10.228 Latency(us) 00:28:10.228 Device Information : IOPS MiB/s Average min max 00:28:10.228 PCIE (0000:88:00.0) NSID 1 from core 0: 83686.50 326.90 381.75 43.73 7260.27 00:28:10.228 ======================================================== 00:28:10.228 Total : 83686.50 326.90 381.75 43.73 7260.27 00:28:10.228 00:28:10.228 02:06:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:10.228 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.601 Initializing NVMe Controllers 00:28:11.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:11.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:11.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:11.601 Initialization complete. Launching workers. 00:28:11.601 ======================================================== 00:28:11.601 Latency(us) 00:28:11.601 Device Information : IOPS MiB/s Average min max 00:28:11.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 109.62 0.43 9122.66 156.42 45714.41 00:28:11.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 44.84 0.18 22300.53 4986.55 50881.51 00:28:11.601 ======================================================== 00:28:11.601 Total : 154.46 0.60 12948.50 156.42 50881.51 00:28:11.601 00:28:11.601 02:06:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:11.601 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.974 Initializing NVMe Controllers 00:28:12.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:12.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:12.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:12.975 Initialization complete. Launching workers. 
00:28:12.975 ======================================================== 00:28:12.975 Latency(us) 00:28:12.975 Device Information : IOPS MiB/s Average min max 00:28:12.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8367.03 32.68 3825.54 628.12 9848.32 00:28:12.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3859.25 15.08 8304.19 6479.93 15922.90 00:28:12.975 ======================================================== 00:28:12.975 Total : 12226.28 47.76 5239.23 628.12 15922.90 00:28:12.975 00:28:12.975 02:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:12.975 02:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:12.975 02:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.975 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.502 Initializing NVMe Controllers 00:28:15.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.502 Controller IO queue size 128, less than required. 00:28:15.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:15.502 Controller IO queue size 128, less than required. 00:28:15.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:15.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:15.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:15.502 Initialization complete. Launching workers. 00:28:15.502 ======================================================== 00:28:15.502 Latency(us) 00:28:15.502 Device Information : IOPS MiB/s Average min max 00:28:15.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1262.50 315.62 103267.07 48010.10 168104.03 00:28:15.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.50 143.88 231868.14 117804.44 360505.30 00:28:15.502 ======================================================== 00:28:15.502 Total : 1838.00 459.50 143533.61 48010.10 360505.30 00:28:15.502 00:28:15.502 02:06:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:15.502 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.502 No valid NVMe controllers or AIO or URING devices found 00:28:15.502 Initializing NVMe Controllers 00:28:15.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:15.502 Controller IO queue size 128, less than required. 00:28:15.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:15.502 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:15.502 Controller IO queue size 128, less than required. 00:28:15.502 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:15.502 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:28:15.502 WARNING: Some requested NVMe devices were skipped 00:28:15.502 02:06:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:15.760 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.287 Initializing NVMe Controllers 00:28:18.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.287 Controller IO queue size 128, less than required. 00:28:18.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:18.287 Controller IO queue size 128, less than required. 00:28:18.287 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:18.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:18.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:18.288 Initialization complete. Launching workers. 00:28:18.288 00:28:18.288 ==================== 00:28:18.288 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:18.288 TCP transport: 00:28:18.288 polls: 13504 00:28:18.288 idle_polls: 8977 00:28:18.288 sock_completions: 4527 00:28:18.288 nvme_completions: 6067 00:28:18.288 submitted_requests: 9078 00:28:18.288 queued_requests: 1 00:28:18.288 00:28:18.288 ==================== 00:28:18.288 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:18.288 TCP transport: 00:28:18.288 polls: 13544 00:28:18.288 idle_polls: 8222 00:28:18.288 sock_completions: 5322 00:28:18.288 nvme_completions: 5769 00:28:18.288 submitted_requests: 8648 00:28:18.288 queued_requests: 1 00:28:18.288 ======================================================== 00:28:18.288 Latency(us) 00:28:18.288 Device Information : IOPS MiB/s Average min max 00:28:18.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1516.23 379.06 85818.35 48861.49 165772.90 00:28:18.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1441.75 360.44 90012.46 48147.30 126793.72 00:28:18.288 ======================================================== 00:28:18.288 Total : 2957.98 739.50 87862.60 48147.30 165772.90 00:28:18.288 00:28:18.288 02:06:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:18.288 02:06:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.545 02:06:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:18.545 02:06:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:18.545 02:06:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:22.724 02:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=7f213848-8b92-463b-83fa-54316e85db69 00:28:22.724 02:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 7f213848-8b92-463b-83fa-54316e85db69 00:28:22.724 02:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_uuid=7f213848-8b92-463b-83fa-54316e85db69 00:28:22.724 02:06:36 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_info 00:28:22.724 02:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local fc 00:28:22.724 02:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local cs 00:28:22.724 02:06:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:28:22.724 { 00:28:22.724 "uuid": "7f213848-8b92-463b-83fa-54316e85db69", 00:28:22.724 "name": "lvs_0", 00:28:22.724 "base_bdev": "Nvme0n1", 00:28:22.724 "total_data_clusters": 238234, 00:28:22.724 "free_clusters": 238234, 00:28:22.724 "block_size": 512, 00:28:22.724 "cluster_size": 4194304 00:28:22.724 } 00:28:22.724 ]' 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="7f213848-8b92-463b-83fa-54316e85db69") .free_clusters' 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # fc=238234 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="7f213848-8b92-463b-83fa-54316e85db69") .cluster_size' 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # cs=4194304 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # free_mb=952936 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # echo 952936 00:28:22.724 952936 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7f213848-8b92-463b-83fa-54316e85db69 lbd_0 20480 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e3ef06a5-4ba9-4da8-8629-58c1c39c891e 00:28:22.724 02:06:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e3ef06a5-4ba9-4da8-8629-58c1c39c891e lvs_n_0 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=5e394691-8425-4c67-a5d9-99c41293bab1 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 5e394691-8425-4c67-a5d9-99c41293bab1 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_uuid=5e394691-8425-4c67-a5d9-99c41293bab1 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_info 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local fc 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local cs 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:28:23.697 { 00:28:23.697 "uuid": "7f213848-8b92-463b-83fa-54316e85db69", 00:28:23.697 "name": "lvs_0", 00:28:23.697 "base_bdev": "Nvme0n1", 00:28:23.697 "total_data_clusters": 238234, 
00:28:23.697 "free_clusters": 233114, 00:28:23.697 "block_size": 512, 00:28:23.697 "cluster_size": 4194304 00:28:23.697 }, 00:28:23.697 { 00:28:23.697 "uuid": "5e394691-8425-4c67-a5d9-99c41293bab1", 00:28:23.697 "name": "lvs_n_0", 00:28:23.697 "base_bdev": "e3ef06a5-4ba9-4da8-8629-58c1c39c891e", 00:28:23.697 "total_data_clusters": 5114, 00:28:23.697 "free_clusters": 5114, 00:28:23.697 "block_size": 512, 00:28:23.697 "cluster_size": 4194304 00:28:23.697 } 00:28:23.697 ]' 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="5e394691-8425-4c67-a5d9-99c41293bab1") .free_clusters' 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # fc=5114 00:28:23.697 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="5e394691-8425-4c67-a5d9-99c41293bab1") .cluster_size' 00:28:23.955 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # cs=4194304 00:28:23.955 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # free_mb=20456 00:28:23.955 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # echo 20456 00:28:23.955 20456 00:28:23.955 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:23.955 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5e394691-8425-4c67-a5d9-99c41293bab1 lbd_nest_0 20456 00:28:24.214 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=08679aac-6fb2-47d1-9555-4af32ed15d08 00:28:24.214 02:06:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:24.214 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:24.214 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 08679aac-6fb2-47d1-9555-4af32ed15d08 00:28:24.472 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.730 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:24.730 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:24.730 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:24.730 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:24.730 02:06:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.988 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.195 Initializing NVMe Controllers 00:28:37.195 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.195 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.195 Initialization complete. Launching workers. 
00:28:37.195 ======================================================== 00:28:37.195 Latency(us) 00:28:37.195 Device Information : IOPS MiB/s Average min max 00:28:37.195 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.49 0.02 21055.34 185.40 45888.09 00:28:37.195 ======================================================== 00:28:37.195 Total : 47.49 0.02 21055.34 185.40 45888.09 00:28:37.195 00:28:37.195 02:06:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:37.195 02:06:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.195 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.315 Initializing NVMe Controllers 00:28:45.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.315 Initialization complete. Launching workers. 00:28:45.315 ======================================================== 00:28:45.315 Latency(us) 00:28:45.315 Device Information : IOPS MiB/s Average min max 00:28:45.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 74.90 9.36 13356.53 6009.02 51825.23 00:28:45.315 ======================================================== 00:28:45.315 Total : 74.90 9.36 13356.53 6009.02 51825.23 00:28:45.315 00:28:45.315 02:07:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:45.315 02:07:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:45.315 02:07:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:45.315 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.517 Initializing NVMe Controllers 00:28:57.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:57.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:57.517 Initialization complete. Launching workers. 00:28:57.517 ======================================================== 00:28:57.517 Latency(us) 00:28:57.517 Device Information : IOPS MiB/s Average min max 00:28:57.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7098.90 3.47 4516.36 297.46 46789.38 00:28:57.517 ======================================================== 00:28:57.517 Total : 7098.90 3.47 4516.36 297.46 46789.38 00:28:57.517 00:28:57.517 02:07:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:57.517 02:07:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:57.517 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.488 Initializing NVMe Controllers 00:29:07.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:07.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:07.488 Initialization complete. Launching workers. 
00:29:07.488 ======================================================== 00:29:07.488 Latency(us) 00:29:07.488 Device Information : IOPS MiB/s Average min max 00:29:07.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3485.74 435.72 9181.22 628.01 22015.13 00:29:07.488 ======================================================== 00:29:07.488 Total : 3485.74 435.72 9181.22 628.01 22015.13 00:29:07.488 00:29:07.488 02:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:07.488 02:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:07.488 02:07:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:07.488 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.498 Initializing NVMe Controllers 00:29:17.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:17.498 Controller IO queue size 128, less than required. 00:29:17.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:17.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:17.498 Initialization complete. Launching workers. 00:29:17.498 ======================================================== 00:29:17.498 Latency(us) 00:29:17.498 Device Information : IOPS MiB/s Average min max 00:29:17.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11772.88 5.75 10874.88 1911.59 30321.57 00:29:17.498 ======================================================== 00:29:17.498 Total : 11772.88 5.75 10874.88 1911.59 30321.57 00:29:17.498 00:29:17.498 02:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:17.498 02:07:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:17.498 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.471 Initializing NVMe Controllers 00:29:27.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:27.471 Controller IO queue size 128, less than required. 00:29:27.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:27.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:27.471 Initialization complete. Launching workers. 
00:29:27.471 ======================================================== 00:29:27.471 Latency(us) 00:29:27.471 Device Information : IOPS MiB/s Average min max 00:29:27.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1190.00 148.75 108065.29 16593.25 221342.06 00:29:27.471 ======================================================== 00:29:27.471 Total : 1190.00 148.75 108065.29 16593.25 221342.06 00:29:27.471 00:29:27.471 02:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.471 02:07:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08679aac-6fb2-47d1-9555-4af32ed15d08 00:29:27.731 02:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:27.731 02:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e3ef06a5-4ba9-4da8-8629-58c1c39c891e 00:29:28.295 02:07:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:28.295 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:28.295 rmmod nvme_tcp 00:29:28.564 rmmod nvme_fabrics 00:29:28.564 rmmod nvme_keyring 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1523216 ']' 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1523216 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1523216 ']' 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1523216 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523216 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 1523216' 00:29:28.564 killing process with pid 1523216 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1523216 00:29:28.564 02:07:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1523216 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:30.469 02:07:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:32.375 00:29:32.375 real 1m30.754s 00:29:32.375 user 5m33.257s 00:29:32.375 sys 0m16.415s 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:32.375 ************************************ 00:29:32.375 END TEST nvmf_perf 00:29:32.375 ************************************ 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:32.375 ************************************ 00:29:32.375 START TEST nvmf_fio_host 00:29:32.375 ************************************ 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:32.375 * Looking for test storage... 
00:29:32.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:32.375 02:07:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.375 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:32.376 02:07:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:34.277 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:34.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:34.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.278 
02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:34.278 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:34.278 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
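The discovery pass above resolves each supported e810 port to its kernel net device by globbing sysfs; a minimal standalone sketch of that lookup, assuming the 0000:0a:00.0 and 0000:0a:00.1 addresses reported in the trace (simplified from the nvmf/common.sh loop, not a verbatim copy of it):
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      # each PCI function lists its bound network interfaces under sysfs
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done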
00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:34.278 02:07:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:34.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:29:34.278 00:29:34.278 --- 10.0.0.2 ping statistics --- 00:29:34.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.278 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:29:34.278 00:29:34.278 --- 10.0.0.1 ping statistics --- 00:29:34.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.278 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1535180 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 1535180 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1535180 ']' 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:34.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.278 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:34.278 [2024-07-24 02:07:49.109654] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:29:34.278 [2024-07-24 02:07:49.109756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.278 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.537 [2024-07-24 02:07:49.173387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.537 [2024-07-24 02:07:49.258754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.537 [2024-07-24 02:07:49.258808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.537 [2024-07-24 02:07:49.258837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.537 [2024-07-24 02:07:49.258856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.537 [2024-07-24 02:07:49.258867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
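With the namespace and 10.0.0.0/24 addressing in place, the trace above launches the target application inside cvl_0_0_ns_spdk and waits for its RPC socket before configuring the transport; a minimal sketch of that bring-up, assuming it is run from the SPDK repo root and using a polling loop as a hypothetical stand-in for the script's waitforlisten helper:
  # start nvmf_tgt in the target namespace with the same core mask and trace flags as the log
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # once it is up, the TCP transport can be created as host/fio.sh does
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192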
00:29:34.537 [2024-07-24 02:07:49.258999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.537 [2024-07-24 02:07:49.259118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.537 [2024-07-24 02:07:49.259171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.537 [2024-07-24 02:07:49.259169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.537 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.537 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:34.537 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:34.796 [2024-07-24 02:07:49.653494] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.796 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:34.796 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:34.796 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:35.054 02:07:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:35.311 Malloc1 00:29:35.311 02:07:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.569 02:07:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:35.827 02:07:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.085 [2024-07-24 02:07:50.786246] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.085 02:07:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:36.343 02:07:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:36.343 02:07:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:36.601 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:36.601 fio-3.35 00:29:36.601 Starting 1 thread 00:29:36.601 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.132 00:29:39.132 test: (groupid=0, jobs=1): err= 0: pid=1535541: Wed Jul 24 02:07:53 2024 00:29:39.132 read: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:29:39.132 slat (nsec): min=1977, max=115190, avg=2465.35, stdev=1516.72 00:29:39.132 clat (usec): min=2123, max=13206, avg=7655.24, stdev=614.37 00:29:39.132 lat (usec): min=2148, max=13208, avg=7657.70, stdev=614.29 00:29:39.132 clat percentiles (usec): 00:29:39.132 | 1.00th=[ 6259], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:29:39.132 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:29:39.132 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:29:39.132 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[12256], 00:29:39.132 | 99.99th=[13173] 00:29:39.132 bw ( KiB/s): min=35960, max=37096, per=99.87%, avg=36742.00, stdev=531.80, samples=4 00:29:39.132 iops : min= 8990, max= 9274, avg=9185.50, stdev=132.95, samples=4 00:29:39.132 write: IOPS=9200, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 
zone resets 00:29:39.132 slat (nsec): min=2128, max=84216, avg=2622.31, stdev=1150.85 00:29:39.132 clat (usec): min=1625, max=11296, avg=6229.47, stdev=512.41 00:29:39.132 lat (usec): min=1632, max=11298, avg=6232.09, stdev=512.36 00:29:39.132 clat percentiles (usec): 00:29:39.132 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:29:39.132 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6325], 00:29:39.132 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6849], 95.00th=[ 6980], 00:29:39.132 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9503], 99.95th=[10290], 00:29:39.132 | 99.99th=[11207] 00:29:39.132 bw ( KiB/s): min=36352, max=37056, per=100.00%, avg=36816.00, stdev=319.47, samples=4 00:29:39.132 iops : min= 9088, max= 9264, avg=9204.00, stdev=79.87, samples=4 00:29:39.132 lat (msec) : 2=0.02%, 4=0.10%, 10=99.75%, 20=0.13% 00:29:39.132 cpu : usr=63.09%, sys=33.42%, ctx=83, majf=0, minf=38 00:29:39.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:39.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:39.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:39.132 issued rwts: total=18450,18456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:39.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:39.132 00:29:39.132 Run status group 0 (all jobs): 00:29:39.132 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:29:39.132 WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # asan_lib= 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:39.132 02:07:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:39.132 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:39.132 fio-3.35 00:29:39.132 Starting 1 thread 00:29:39.132 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.697 00:29:41.697 test: (groupid=0, jobs=1): err= 0: pid=1535875: Wed Jul 24 02:07:56 2024 00:29:41.697 read: IOPS=7791, BW=122MiB/s (128MB/s)(244MiB/2007msec) 00:29:41.697 slat (nsec): min=2874, max=93605, avg=3746.04, stdev=1628.46 00:29:41.697 clat (usec): min=2649, max=19898, avg=9240.58, stdev=2048.35 00:29:41.697 lat (usec): min=2653, max=19902, avg=9244.33, stdev=2048.40 00:29:41.697 clat percentiles (usec): 00:29:41.697 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7504], 00:29:41.697 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:29:41.697 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11863], 95.00th=[12780], 00:29:41.697 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15795], 99.95th=[16057], 00:29:41.697 | 99.99th=[17957] 00:29:41.697 bw ( KiB/s): min=50944, max=76320, per=52.47%, avg=65408.00, stdev=10706.99, samples=4 00:29:41.697 iops : min= 3184, max= 4770, avg=4088.00, stdev=669.19, samples=4 00:29:41.697 write: IOPS=4531, BW=70.8MiB/s (74.2MB/s)(133MiB/1885msec); 0 zone resets 00:29:41.697 slat (usec): min=30, max=195, avg=34.05, stdev= 5.74 00:29:41.697 clat (usec): min=5386, max=24074, avg=12376.07, stdev=2124.25 00:29:41.697 lat (usec): min=5432, max=24106, avg=12410.12, stdev=2124.62 00:29:41.697 clat percentiles (usec): 00:29:41.697 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10683], 00:29:41.697 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12387], 60.00th=[12780], 00:29:41.697 | 70.00th=[13304], 80.00th=[13960], 90.00th=[15139], 95.00th=[16057], 00:29:41.697 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20579], 99.95th=[22676], 00:29:41.697 | 99.99th=[23987] 00:29:41.697 bw ( KiB/s): min=53216, max=79104, per=93.64%, avg=67896.00, stdev=11045.71, samples=4 00:29:41.697 iops : min= 3326, max= 4944, avg=4243.50, stdev=690.36, samples=4 00:29:41.697 lat (msec) : 4=0.16%, 10=48.54%, 20=51.26%, 50=0.04% 00:29:41.697 cpu : usr=74.38%, sys=22.88%, ctx=49, majf=0, minf=60 
00:29:41.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:29:41.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:41.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:41.697 issued rwts: total=15637,8542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:41.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:41.697 00:29:41.697 Run status group 0 (all jobs): 00:29:41.697 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=244MiB (256MB), run=2007-2007msec 00:29:41.697 WRITE: bw=70.8MiB/s (74.2MB/s), 70.8MiB/s-70.8MiB/s (74.2MB/s-74.2MB/s), io=133MiB (140MB), run=1885-1885msec 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=() 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1511 -- # local bdfs 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:29:41.697 02:07:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:44.976 Nvme0n1 00:29:44.976 02:07:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d0112630-42a0-4571-bc1c-930eeedafa96 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d0112630-42a0-4571-bc1c-930eeedafa96 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_uuid=d0112630-42a0-4571-bc1c-930eeedafa96 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_info 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local fc 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local cs 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:29:48.254 { 00:29:48.254 "uuid": 
"d0112630-42a0-4571-bc1c-930eeedafa96", 00:29:48.254 "name": "lvs_0", 00:29:48.254 "base_bdev": "Nvme0n1", 00:29:48.254 "total_data_clusters": 930, 00:29:48.254 "free_clusters": 930, 00:29:48.254 "block_size": 512, 00:29:48.254 "cluster_size": 1073741824 00:29:48.254 } 00:29:48.254 ]' 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="d0112630-42a0-4571-bc1c-930eeedafa96") .free_clusters' 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # fc=930 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="d0112630-42a0-4571-bc1c-930eeedafa96") .cluster_size' 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # cs=1073741824 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # free_mb=952320 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # echo 952320 00:29:48.254 952320 00:29:48.254 02:08:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:48.254 9581c4f9-89b1-4f0b-8ef2-542027151cc6 00:29:48.512 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:48.512 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:48.769 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:49.026 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:49.284 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:49.284 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:49.284 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:49.284 02:08:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.284 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:49.284 fio-3.35 00:29:49.284 Starting 1 thread 00:29:49.284 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.842 00:29:51.842 test: (groupid=0, jobs=1): err= 0: pid=1537162: Wed Jul 24 02:08:06 2024 00:29:51.842 read: IOPS=6057, BW=23.7MiB/s (24.8MB/s)(47.5MiB/2007msec) 00:29:51.842 slat (usec): min=2, max=129, avg= 2.56, stdev= 1.80 00:29:51.842 clat (usec): min=801, max=171269, avg=11603.47, stdev=11610.33 00:29:51.842 lat (usec): min=804, max=171304, avg=11606.02, stdev=11610.56 00:29:51.842 clat percentiles (msec): 00:29:51.842 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:51.842 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:51.842 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:29:51.842 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:51.842 | 99.99th=[ 171] 00:29:51.842 bw ( KiB/s): min=16928, max=26760, per=99.70%, avg=24156.00, stdev=4821.56, samples=4 00:29:51.842 iops : min= 4232, max= 6690, avg=6039.00, stdev=1205.39, samples=4 00:29:51.842 write: IOPS=6037, BW=23.6MiB/s (24.7MB/s)(47.3MiB/2007msec); 0 zone resets 00:29:51.842 slat (usec): min=2, max=248, avg= 2.68, stdev= 2.40 00:29:51.842 clat (usec): min=332, max=169365, avg=9422.63, stdev=10898.81 00:29:51.842 lat (usec): min=335, max=169371, avg=9425.31, stdev=10899.22 00:29:51.842 clat percentiles (msec): 00:29:51.842 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:51.842 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:51.842 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:51.842 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:29:51.842 | 99.99th=[ 169] 00:29:51.842 bw ( KiB/s): min=17960, 
max=26368, per=99.95%, avg=24138.00, stdev=4120.77, samples=4 00:29:51.842 iops : min= 4490, max= 6592, avg=6034.50, stdev=1030.19, samples=4 00:29:51.842 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:51.842 lat (msec) : 2=0.03%, 4=0.13%, 10=57.49%, 20=41.80%, 250=0.53% 00:29:51.842 cpu : usr=61.27%, sys=36.04%, ctx=123, majf=0, minf=38 00:29:51.842 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:51.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:51.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:51.843 issued rwts: total=12157,12117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:51.843 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:51.843 00:29:51.843 Run status group 0 (all jobs): 00:29:51.843 READ: bw=23.7MiB/s (24.8MB/s), 23.7MiB/s-23.7MiB/s (24.8MB/s-24.8MB/s), io=47.5MiB (49.8MB), run=2007-2007msec 00:29:51.843 WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.3MiB (49.6MB), run=2007-2007msec 00:29:51.843 02:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:52.100 02:08:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=26cc1547-d47f-41d6-850d-7abe8f79cf7b 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 26cc1547-d47f-41d6-850d-7abe8f79cf7b 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_uuid=26cc1547-d47f-41d6-850d-7abe8f79cf7b 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_info 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local fc 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local cs 00:29:53.482 02:08:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:29:53.482 { 00:29:53.482 "uuid": "d0112630-42a0-4571-bc1c-930eeedafa96", 00:29:53.482 "name": "lvs_0", 00:29:53.482 "base_bdev": "Nvme0n1", 00:29:53.482 "total_data_clusters": 930, 00:29:53.482 "free_clusters": 0, 00:29:53.482 "block_size": 512, 00:29:53.482 "cluster_size": 1073741824 00:29:53.482 }, 00:29:53.482 { 00:29:53.482 "uuid": "26cc1547-d47f-41d6-850d-7abe8f79cf7b", 00:29:53.482 "name": "lvs_n_0", 00:29:53.482 "base_bdev": "9581c4f9-89b1-4f0b-8ef2-542027151cc6", 00:29:53.482 "total_data_clusters": 237847, 00:29:53.482 "free_clusters": 237847, 00:29:53.482 "block_size": 512, 00:29:53.482 "cluster_size": 4194304 00:29:53.482 } 00:29:53.482 ]' 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="26cc1547-d47f-41d6-850d-7abe8f79cf7b") .free_clusters' 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # fc=237847 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | 
select(.uuid=="26cc1547-d47f-41d6-850d-7abe8f79cf7b") .cluster_size' 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # cs=4194304 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # free_mb=951388 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # echo 951388 00:29:53.482 951388 00:29:53.482 02:08:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:54.412 39577771-47f2-4b8c-ba7a-d726965f0ab8 00:29:54.412 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:54.412 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:54.669 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:54.926 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:54.927 02:08:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:55.184 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:55.184 fio-3.35 00:29:55.184 Starting 1 thread 00:29:55.184 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.707 00:29:57.707 test: (groupid=0, jobs=1): err= 0: pid=1537894: Wed Jul 24 02:08:12 2024 00:29:57.707 read: IOPS=5892, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2008msec) 00:29:57.707 slat (nsec): min=1900, max=141691, avg=2482.20, stdev=2046.40 00:29:57.707 clat (usec): min=4534, max=21114, avg=11938.76, stdev=1085.49 00:29:57.707 lat (usec): min=4551, max=21116, avg=11941.25, stdev=1085.39 00:29:57.707 clat percentiles (usec): 00:29:57.707 | 1.00th=[ 9372], 5.00th=[10290], 10.00th=[10683], 20.00th=[11076], 00:29:57.707 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:29:57.707 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:29:57.707 | 99.00th=[14353], 99.50th=[14746], 99.90th=[18744], 99.95th=[20055], 00:29:57.707 | 99.99th=[21103] 00:29:57.707 bw ( KiB/s): min=22400, max=24104, per=99.79%, avg=23522.00, stdev=781.01, samples=4 00:29:57.707 iops : min= 5600, max= 6026, avg=5880.50, stdev=195.25, samples=4 00:29:57.707 write: IOPS=5885, BW=23.0MiB/s (24.1MB/s)(46.2MiB/2008msec); 0 zone resets 00:29:57.707 slat (nsec): min=2045, max=99336, avg=2621.95, stdev=1280.72 00:29:57.707 clat (usec): min=2231, max=17520, avg=9662.21, stdev=898.68 00:29:57.707 lat (usec): min=2238, max=17523, avg=9664.83, stdev=898.66 00:29:57.707 clat percentiles (usec): 00:29:57.707 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:29:57.707 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:29:57.707 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10683], 95.00th=[10945], 00:29:57.707 | 99.00th=[11600], 99.50th=[11863], 99.90th=[15795], 99.95th=[16188], 00:29:57.707 | 99.99th=[17433] 00:29:57.707 bw ( KiB/s): min=23320, max=23616, per=99.92%, avg=23526.00, stdev=138.78, samples=4 00:29:57.707 iops : min= 5830, max= 5904, avg=5881.50, stdev=34.69, samples=4 00:29:57.707 lat (msec) : 4=0.05%, 10=35.04%, 20=64.89%, 50=0.03% 00:29:57.707 cpu : usr=59.39%, sys=38.02%, ctx=110, majf=0, minf=38 00:29:57.707 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:57.707 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:57.707 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:57.707 issued rwts: total=11833,11819,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:29:57.707 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:57.707 00:29:57.707 Run status group 0 (all jobs): 00:29:57.707 READ: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.5MB), run=2008-2008msec 00:29:57.707 WRITE: bw=23.0MiB/s (24.1MB/s), 23.0MiB/s-23.0MiB/s (24.1MB/s-24.1MB/s), io=46.2MiB (48.4MB), run=2008-2008msec 00:29:57.707 02:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:57.707 02:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:57.707 02:08:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:01.883 02:08:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:01.883 02:08:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:05.159 02:08:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:05.159 02:08:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.057 rmmod nvme_tcp 00:30:07.057 rmmod nvme_fabrics 00:30:07.057 rmmod nvme_keyring 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1535180 ']' 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1535180 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1535180 ']' 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1535180 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1535180 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1535180' 00:30:07.057 killing process with pid 1535180 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1535180 00:30:07.057 02:08:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1535180 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.316 02:08:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:09.225 00:30:09.225 real 0m37.132s 00:30:09.225 user 2m22.922s 00:30:09.225 sys 0m6.794s 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.225 ************************************ 00:30:09.225 END TEST nvmf_fio_host 00:30:09.225 ************************************ 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:09.225 02:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:09.484 ************************************ 00:30:09.484 START TEST nvmf_failover 00:30:09.484 ************************************ 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:09.484 * Looking for test storage... 
00:30:09.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.484 02:08:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:11.387 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.388 02:08:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:11.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:11.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:11.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:11.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.388 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:11.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:30:11.389 00:30:11.389 --- 10.0.0.2 ping statistics --- 00:30:11.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.389 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:11.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:30:11.389 00:30:11.389 --- 10.0.0.1 ping statistics --- 00:30:11.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.389 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1541249 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:11.389 02:08:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1541249 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1541249 ']' 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.389 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.648 [2024-07-24 02:08:26.288561] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:30:11.648 [2024-07-24 02:08:26.288636] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.648 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.648 [2024-07-24 02:08:26.357813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:11.648 [2024-07-24 02:08:26.449811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.648 [2024-07-24 02:08:26.449873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.648 [2024-07-24 02:08:26.449888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.648 [2024-07-24 02:08:26.449901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.648 [2024-07-24 02:08:26.449913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
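Taken together, the nvmf_tcp_init trace above amounts to a small piece of namespace plumbing: one port of the NIC pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, its peer (cvl_0_1) stays in the default namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-tested before the target starts. Condensed from the trace (the cvl_* interface names and the namespace name are specific to this host), the equivalent manual steps are:

  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target-side port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on 4420
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

With that in place, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xE, as the nvmf/common.sh@480 line above shows), so every command the initiator issues really crosses the kernel TCP stack.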
00:30:11.648 [2024-07-24 02:08:26.449993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.648 [2024-07-24 02:08:26.450120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.648 [2024-07-24 02:08:26.450124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.906 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:11.906 [2024-07-24 02:08:26.790239] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.164 02:08:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:12.422 Malloc0 00:30:12.422 02:08:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.680 02:08:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.938 02:08:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.938 [2024-07-24 02:08:27.820656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.195 02:08:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:13.195 [2024-07-24 02:08:28.069380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:13.195 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:13.453 [2024-07-24 02:08:28.314168] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1541420 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1541420 /var/tmp/bdevperf.sock 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1541420 ']' 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:13.453 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:13.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:13.454 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:13.454 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:14.019 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:14.019 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:14.019 02:08:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.276 NVMe0n1 00:30:14.276 02:08:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:14.534 00:30:14.791 02:08:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1541554 00:30:14.791 02:08:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:14.791 02:08:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:15.724 02:08:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.982 02:08:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:19.264 02:08:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:19.264 00:30:19.264 02:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:19.522 [2024-07-24 02:08:34.315137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.522 [2024-07-24 02:08:34.315206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.522 [2024-07-24 02:08:34.315222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) 
to be set 00:30:19.522 (previous tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x23e80d0 repeated 43 times, timestamps 02:08:34.315235 through 02:08:34.315740, with only the timestamp changing) 00:30:19.522 [2024-07-24 02:08:34.315751] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 [2024-07-24 02:08:34.315762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 [2024-07-24 02:08:34.315773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 [2024-07-24 02:08:34.315784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 [2024-07-24 02:08:34.315795] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 [2024-07-24 02:08:34.315806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 [2024-07-24 02:08:34.315817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e80d0 is same with the state(5) to be set 00:30:19.523 02:08:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:22.815 02:08:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.815 [2024-07-24 02:08:37.616727] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.815 02:08:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:23.765 02:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:24.027 [2024-07-24 02:08:38.871512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871669] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.871681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 
02:08:38.871692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 (previous tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* message for tqpair=0x23e9460 repeated 42 times, timestamps 02:08:38.871703 through 02:08:38.872209, with only the timestamp changing) 00:30:24.027 [2024-07-24 02:08:38.872221] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.872232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.872244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.872259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.027 [2024-07-24 02:08:38.872270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.028 [2024-07-24 02:08:38.872282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.028 [2024-07-24 02:08:38.872293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.028 [2024-07-24 02:08:38.872304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.028 [2024-07-24 02:08:38.872322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9460 is same with the state(5) to be set 00:30:24.028 02:08:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1541554 00:30:30.604 0 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1541420 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1541420 ']' 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1541420 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1541420 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1541420' 00:30:30.604 killing process with pid 1541420 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1541420 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1541420 00:30:30.604 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:30.604 [2024-07-24 02:08:28.378381] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
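The try.txt dump that follows is the host (bdevperf) side of the run; the aborted commands it records line up with the listener removals traced above. Condensed from the host/failover.sh trace (full rpc.py paths abbreviated; target-side calls use the default /var/tmp/spdk.sock, host-side calls the -s /var/tmp/bdevperf.sock socket), the sequence the test drives is roughly:

  # target: TCP transport, a 64 MB malloc bdev, one subsystem listening on 4420/4421/4422
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done
  # host: bdevperf (-q 128 -o 4096 -w verify -t 15) attaches the controller over two paths
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # verify I/O runs in the background and is waited on at the end
  # while I/O is in flight, listeners are pulled and re-added to force path failover
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422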
00:30:30.604 [2024-07-24 02:08:28.378465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541420 ] 00:30:30.604 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.604 [2024-07-24 02:08:28.442736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.604 [2024-07-24 02:08:28.531200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.604 Running I/O for 15 seconds... 00:30:30.604 [2024-07-24 02:08:30.671672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.671742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.671795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.671827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.671856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.671885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.671914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.671943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.671972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.671987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78352 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:30.604 [2024-07-24 02:08:30.672353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.604 [2024-07-24 02:08:30.672382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.672411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.672439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.672468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.672501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.604 [2024-07-24 02:08:30.672531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.604 [2024-07-24 02:08:30.672546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.605 [2024-07-24 02:08:30.672560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.605 [2024-07-24 02:08:30.672588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.605 [2024-07-24 02:08:30.672623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.605 [2024-07-24 02:08:30.672667] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.605 [2024-07-24 02:08:30.672695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.672974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.672987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673237] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.605 [2024-07-24 02:08:30.673478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.605 [2024-07-24 02:08:30.673623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.605 [2024-07-24 02:08:30.673637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
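Each READ/WRITE command line above, paired with an 'ABORTED - SQ DELETION (00/08)' completion, is an in-flight bdevperf command whose submission queue was deleted when the qpair on the removed listener went away; bdev_nvme's failover handling is expected to resubmit these on the surviving path so the verify workload keeps running. As an illustrative one-liner (not part of the test), the number of commands cancelled during a run can be gauged by counting these completions in the per-test log that failover.sh cats above:

  grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt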
00:30:30.606 [2024-07-24 02:08:30.673909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.673979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.673995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:78800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674199] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.606 [2024-07-24 02:08:30.674212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.606 [2024-07-24 02:08:30.674740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.606 [2024-07-24 02:08:30.674755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.674983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.674998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79064 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:30.607 [2024-07-24 02:08:30.675385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.607 [2024-07-24 02:08:30.675598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2313400 is same with the state(5) to be set 00:30:30.607 [2024-07-24 02:08:30.675640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.607 [2024-07-24 02:08:30.675651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.607 [2024-07-24 02:08:30.675663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:30:30.607 [2024-07-24 02:08:30.675676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.607 [2024-07-24 02:08:30.675735] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2313400 was disconnected and freed. reset controller. 
00:30:30.607 [2024-07-24 02:08:30.675753] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:30:30.607 [2024-07-24 02:08:30.675787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.607 [2024-07-24 02:08:30.675806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.607 [2024-07-24 02:08:30.675821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.607 [2024-07-24 02:08:30.675834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.607 [2024-07-24 02:08:30.675848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.607 [2024-07-24 02:08:30.675862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.607 [2024-07-24 02:08:30.675875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.607 [2024-07-24 02:08:30.675888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.607 [2024-07-24 02:08:30.675901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:30.607 [2024-07-24 02:08:30.679269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:30:30.607 [2024-07-24 02:08:30.679307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231c830 (9): Bad file descriptor 
00:30:30.607 [2024-07-24 02:08:30.718411] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
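The "ABORTED - SQ DELETION (00/08)" status printed for each outstanding I/O above is the NVMe generic status "Command Aborted due to SQ Deletion", shown as (sct/sc) = (0x00/0x08): the commands were not failed by the namespace, they were aborted because their submission queue on the old path (10.0.0.2:4420) was torn down during the failover to 10.0.0.2:4421. A minimal sketch of how a completion callback can recognise that status with the public SPDK headers follows; the callback name and the stderr message are illustrative only and are not part of this test run.

    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Hypothetical spdk_nvme_cmd_cb completion callback: detect the "(00/08)"
     * status seen in the log, i.e. sct = GENERIC, sc = ABORTED - SQ DELETION. */
    static void
    io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* Aborted because its submission queue was deleted (path teardown),
             * not because of a media error; it can be resubmitted on the new path. */
            fprintf(stderr, "I/O aborted by SQ deletion\n");
        }
    }
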
00:30:30.608 [2024-07-24 02:08:34.316104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.316978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.316990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.317005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.317018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.317032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.317045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.608 [2024-07-24 02:08:34.317059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.608 [2024-07-24 02:08:34.317072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79384 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.609 [2024-07-24 02:08:34.317546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:30.609 [2024-07-24 02:08:34.317678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.317976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.317989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.318004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.318017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.318031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.318044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.318059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.318076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.318091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.318104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.609 [2024-07-24 02:08:34.318119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.609 [2024-07-24 02:08:34.318133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318244] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 
02:08:34.318869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.318955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.610 [2024-07-24 02:08:34.318969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.319003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.610 [2024-07-24 02:08:34.319021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79840 len:8 PRP1 0x0 PRP2 0x0 00:30:30.610 [2024-07-24 02:08:34.319034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.319051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.610 [2024-07-24 02:08:34.319064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.610 [2024-07-24 02:08:34.319075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79848 len:8 PRP1 0x0 PRP2 0x0 00:30:30.610 [2024-07-24 02:08:34.319088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.319101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.610 [2024-07-24 02:08:34.319112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.610 [2024-07-24 02:08:34.319123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79856 len:8 PRP1 0x0 PRP2 0x0 00:30:30.610 [2024-07-24 02:08:34.319136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.319149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.610 [2024-07-24 02:08:34.319159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.610 [2024-07-24 02:08:34.319170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79864 len:8 PRP1 0x0 PRP2 0x0 00:30:30.610 [2024-07-24 02:08:34.319183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.319196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.610 [2024-07-24 02:08:34.319207] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.610 [2024-07-24 02:08:34.319218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79872 len:8 PRP1 0x0 PRP2 0x0 00:30:30.610 [2024-07-24 02:08:34.319235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.610 [2024-07-24 02:08:34.319248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.610 [2024-07-24 02:08:34.319259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.610 [2024-07-24 02:08:34.319270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79880 len:8 PRP1 0x0 PRP2 0x0 00:30:30.610 [2024-07-24 02:08:34.319283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79888 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79896 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79904 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79912 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 
[2024-07-24 02:08:34.319494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319504] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79920 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79928 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79944 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79952 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319778] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.319952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.319969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.319980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.319992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.320029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80008 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:30.611 [2024-07-24 02:08:34.320077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80016 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.320125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80024 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.320172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80032 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.320229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80040 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.320276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.611 [2024-07-24 02:08:34.320331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.611 [2024-07-24 02:08:34.320343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80056 len:8 PRP1 0x0 PRP2 0x0 00:30:30.611 [2024-07-24 02:08:34.320360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.611 [2024-07-24 02:08:34.320375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.612 [2024-07-24 02:08:34.320387] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.612 [2024-07-24 02:08:34.320398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80064 len:8 PRP1 0x0 PRP2 0x0 00:30:30.612 [2024-07-24 02:08:34.320411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:34.320424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.612 [2024-07-24 02:08:34.320434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.612 [2024-07-24 02:08:34.320446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80072 len:8 PRP1 0x0 PRP2 0x0 00:30:30.612 [2024-07-24 02:08:34.320459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:34.320472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.612 [2024-07-24 02:08:34.320483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.612 [2024-07-24 02:08:34.320494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80080 len:8 PRP1 0x0 PRP2 0x0 00:30:30.612 [2024-07-24 02:08:34.320507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:34.320521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.612 [2024-07-24 02:08:34.320531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.612 [2024-07-24 02:08:34.320543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:30:30.612 [2024-07-24 02:08:34.320555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:34.320568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.612 [2024-07-24 02:08:34.320579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.612 [2024-07-24 02:08:34.320590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:30:30.612 [2024-07-24 02:08:34.320603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:34.320664] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23151d0 was disconnected and freed. reset controller. 
00:30:30.612 [2024-07-24 02:08:34.320683] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:30:30.612 [2024-07-24 02:08:34.320717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.612 [2024-07-24 02:08:34.320735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.612 [2024-07-24 02:08:34.320749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.612 [2024-07-24 02:08:34.320763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.612 [2024-07-24 02:08:34.320776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.612 [2024-07-24 02:08:34.320789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.612 [2024-07-24 02:08:34.320807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:30:30.612 [2024-07-24 02:08:34.320820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:30.612 [2024-07-24 02:08:34.320833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:30.612 [2024-07-24 02:08:34.324110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:30:30.612 [2024-07-24 02:08:34.324150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231c830 (9): Bad file descriptor 
00:30:30.612 [2024-07-24 02:08:34.402471] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:30.612 [2024-07-24 02:08:38.873540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873860] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.873975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.873988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.612 [2024-07-24 02:08:38.874194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.612 [2024-07-24 02:08:38.874206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.613 [2024-07-24 02:08:38.874242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:90 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26368 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.874979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.874993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.875007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.875022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 
02:08:38.875035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.875063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.875078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.875091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.875106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.875119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.875134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.613 [2024-07-24 02:08:38.875147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.613 [2024-07-24 02:08:38.875162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.875980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.875993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.876022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.876051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.876078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.876106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:30.614 [2024-07-24 02:08:38.876139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.614 [2024-07-24 02:08:38.876167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.614 [2024-07-24 02:08:38.876195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.614 [2024-07-24 02:08:38.876241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26744 len:8 PRP1 0x0 PRP2 0x0 00:30:30.614 [2024-07-24 02:08:38.876254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.614 [2024-07-24 02:08:38.876331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.614 [2024-07-24 02:08:38.876363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.614 [2024-07-24 02:08:38.876376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.614 [2024-07-24 02:08:38.876390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.615 [2024-07-24 02:08:38.876403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:30.615 [2024-07-24 02:08:38.876416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.615 [2024-07-24 02:08:38.876429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231c830 is same with the state(5) to be set 00:30:30.615 [2024-07-24 02:08:38.876655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.615 [2024-07-24 02:08:38.876676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.615 [2024-07-24 02:08:38.876688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26752 len:8 PRP1 0x0 PRP2 0x0 00:30:30.615 [2024-07-24 02:08:38.876701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.615 [2024-07-24 02:08:38.876717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.615 [2024-07-24 02:08:38.876729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.615 [2024-07-24 02:08:38.876741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26760 len:8 PRP1 0x0 PRP2 0x0 00:30:30.615 [2024-07-24 02:08:38.876754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.615 [2024-07-24 02:08:38.876767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.615 [2024-07-24 02:08:38.876778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.615 [2024-07-24 02:08:38.876793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26768 len:8 PRP1 0x0 PRP2 0x0 00:30:30.615 [2024-07-24 02:08:38.876806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:30.615 [2024-07-24 02:08:38.876820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:30.615 [2024-07-24 02:08:38.876830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:30.615 [2024-07-24 02:08:38.876842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26776 len:8 PRP1 0x0 PRP2 0x0 00:30:30.615 [2024-07-24 02:08:38.876854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:30.615 [... repeated nvme_qpair.c notices omitted: for each remaining queued READ/WRITE command on qid:1 (cid:0 nsid:1, LBAs in the 25904-26920 range, len:8) the driver logged "aborting queued i/o", "Command completed manually:", the command, and an "ABORTED - SQ DELETION (00/08)" completion ...]
00:30:30.620 [2024-07-24 02:08:38.888818] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23400b0 was disconnected and freed. reset controller.
00:30:30.620 [2024-07-24 02:08:38.888836] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:30.620 [2024-07-24 02:08:38.888852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:30.620 [2024-07-24 02:08:38.888906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231c830 (9): Bad file descriptor
00:30:30.620 [2024-07-24 02:08:38.892203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:30.620 [2024-07-24 02:08:39.017140] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:30.620 00:30:30.621 Latency(us) 00:30:30.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.621 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:30.621 Verification LBA range: start 0x0 length 0x4000 00:30:30.621 NVMe0n1 : 15.05 8591.51 33.56 620.84 0.00 13830.54 603.78 42525.58 00:30:30.621 =================================================================================================================== 00:30:30.621 Total : 8591.51 33.56 620.84 0.00 13830.54 603.78 42525.58 00:30:30.621 Received shutdown signal, test time was about 15.000000 seconds 00:30:30.621 00:30:30.621 Latency(us) 00:30:30.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.621 =================================================================================================================== 00:30:30.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1543395 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1543395 /var/tmp/bdevperf.sock 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1543395 ']' 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:30.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
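The trace above is host/failover.sh checking the first bdevperf run: it greps the captured output for 'Resetting controller successful' and requires exactly three hits, one per failover hop across the 4420/4421/4422 listeners, before it starts a fresh bdevperf in RPC-managed mode (-z) on /var/tmp/bdevperf.sock. A minimal standalone sketch of that check, assuming the capture file is the try.txt that the script cats and removes further down ($testdir here is only shorthand for the .../spdk/test/nvmf/host directory):

    # sketch of the failover-count check traced above (host/failover.sh@65-67)
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi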
00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:30.621 02:08:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:30.621 02:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:30.621 02:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:30.621 02:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:30.621 [2024-07-24 02:08:45.399963] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:30.621 02:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:30.879 [2024-07-24 02:08:45.640550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:30.879 02:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.136 NVMe0n1 00:30:31.136 02:08:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.702 00:30:31.702 02:08:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:31.960 00:30:31.960 02:08:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:31.960 02:08:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:32.218 02:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.477 02:08:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:35.763 02:08:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:35.763 02:08:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:35.763 02:08:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1544059 00:30:35.763 02:08:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:35.763 02:08:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1544059 00:30:37.144 0 00:30:37.144 02:08:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
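The RPC sequence traced above (host/failover.sh@76 through @84) is the core of the second pass: the target gains extra listeners on ports 4421 and 4422, the bdevperf application attaches the same subsystem through all three ports under the single bdev name NVMe0, and the active 4420 path is then detached so bdev_nvme has to fail over. Collected in one place with the same arguments as the trace (rpc.py paths shortened; the loop is a condensation for readability, not the script's literal form):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # attach the same controller through each listener; the extra paths become
    # failover targets for the single NVMe0n1 bdev
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
               -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # drop the active path; I/O should fail over to the next trid
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
           -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1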
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:37.144 [2024-07-24 02:08:44.916854] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:30:37.144 [2024-07-24 02:08:44.916937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543395 ] 00:30:37.144 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.144 [2024-07-24 02:08:44.974807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.144 [2024-07-24 02:08:45.057443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.144 [2024-07-24 02:08:47.281285] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:37.144 [2024-07-24 02:08:47.281377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.144 [2024-07-24 02:08:47.281399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.144 [2024-07-24 02:08:47.281415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.144 [2024-07-24 02:08:47.281429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.144 [2024-07-24 02:08:47.281442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.144 [2024-07-24 02:08:47.281456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.144 [2024-07-24 02:08:47.281469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:37.144 [2024-07-24 02:08:47.281483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.144 [2024-07-24 02:08:47.281496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:37.144 [2024-07-24 02:08:47.281539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:37.144 [2024-07-24 02:08:47.281569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc7830 (9): Bad file descriptor 00:30:37.144 [2024-07-24 02:08:47.285169] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:37.144 Running I/O for 1 seconds... 
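The bdevperf summary tables in this log (the 15-second failover run above and the 1-second run that follows) share one layout: per-job runtime in seconds, IOPS, throughput in MiB/s, failed I/O per second, timeouts per second, and average/min/max latency in microseconds. The throughput column is simply IOPS times the 4096-byte I/O size, e.g. 8591.51 IOPS x 4096 B / 1048576 ≈ 33.56 MiB/s for the failover run, and the nonzero Fail/s there lines up with the long run of ABORTED - SQ DELETION completions printed while paths were being switched.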
00:30:37.144 00:30:37.144 Latency(us) 00:30:37.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.144 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:37.144 Verification LBA range: start 0x0 length 0x4000 00:30:37.144 NVMe0n1 : 1.01 8644.41 33.77 0.00 0.00 14748.05 3398.16 11845.03 00:30:37.144 =================================================================================================================== 00:30:37.144 Total : 8644.41 33.77 0.00 0.00 14748.05 3398.16 11845.03 00:30:37.144 02:08:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:37.144 02:08:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:37.144 02:08:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.402 02:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:37.402 02:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:37.660 02:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.920 02:08:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1543395 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1543395 ']' 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1543395 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.205 02:08:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1543395 00:30:41.205 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:41.205 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:41.205 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1543395' 00:30:41.205 killing process with pid 1543395 00:30:41.205 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1543395 00:30:41.205 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1543395 00:30:41.463 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:41.464 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:41.722 rmmod nvme_tcp 00:30:41.722 rmmod nvme_fabrics 00:30:41.722 rmmod nvme_keyring 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1541249 ']' 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1541249 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1541249 ']' 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1541249 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1541249 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1541249' 00:30:41.722 killing process with pid 1541249 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1541249 00:30:41.722 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1541249 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:41.980 02:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:44.518 00:30:44.518 real 0m34.694s 00:30:44.518 user 2m2.621s 00:30:44.518 sys 0m5.708s 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:44.518 ************************************ 00:30:44.518 END TEST nvmf_failover 00:30:44.518 ************************************ 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:44.518 ************************************ 00:30:44.518 START TEST nvmf_host_discovery 00:30:44.518 ************************************ 00:30:44.518 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:44.518 * Looking for test storage... 00:30:44.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:44.519 02:08:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:44.519 02:08:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:45.951 02:09:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:45.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:45.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:45.951 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:45.951 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:45.951 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:45.952 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.952 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.952 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.952 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:46.210 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:46.210 02:09:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:46.210 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:46.210 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:46.210 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:46.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:46.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:30:46.210 00:30:46.211 --- 10.0.0.2 ping statistics --- 00:30:46.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.211 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:46.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:30:46.211 00:30:46.211 --- 10.0.0.1 ping statistics --- 00:30:46.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.211 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1546723 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1546723 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1546723 ']' 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
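What nvmf/common.sh has just done above is build the two-endpoint TCP topology for the discovery test: one of the two detected ice ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, reachability is verified with a ping in each direction, and the target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...). The same steps pulled out of the trace for readability, with interface names and addresses exactly as logged (illustrative only):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator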
00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.211 02:09:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.211 [2024-07-24 02:09:00.975535] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:30:46.211 [2024-07-24 02:09:00.975612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.211 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.211 [2024-07-24 02:09:01.042960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.469 [2024-07-24 02:09:01.133344] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.469 [2024-07-24 02:09:01.133406] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.469 [2024-07-24 02:09:01.133423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.469 [2024-07-24 02:09:01.133436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.469 [2024-07-24 02:09:01.133448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.469 [2024-07-24 02:09:01.133478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.469 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.469 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:46.469 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:46.469 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 [2024-07-24 02:09:01.271790] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:30:46.470 [2024-07-24 02:09:01.279975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 null0 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 null1 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1546876 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1546876 /tmp/host.sock 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1546876 ']' 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:46.470 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.470 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.470 [2024-07-24 02:09:01.353444] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
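At this point host/discovery.sh has prepared the target side: a TCP transport, a discovery listener on 10.0.0.2:8009, two null bdevs to back the subsystems it will create, and a second SPDK application (core mask 0x1, started with -r /tmp/host.sock) that plays the NVMe-oF host and is driven over its own RPC socket. Condensed from the trace above; the script actually goes through its rpc_cmd wrapper rather than calling rpc.py directly, and the arguments are as logged:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine
    # separate "host" app, controlled over /tmp/host.sock instead of the default socket
    nvmf_tgt -m 0x1 -r /tmp/host.sock &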
00:30:46.470 [2024-07-24 02:09:01.353526] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546876 ] 00:30:46.728 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.728 [2024-07-24 02:09:01.412793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.728 [2024-07-24 02:09:01.501678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.728 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 
02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:46.986 02:09:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:46.986 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:46.987 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 [2024-07-24 02:09:01.901656] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 02:09:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:47.245 02:09:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:47.810 [2024-07-24 02:09:02.680102] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:47.810 [2024-07-24 02:09:02.680134] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:47.810 [2024-07-24 02:09:02.680166] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.067 [2024-07-24 02:09:02.768478] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:48.067 [2024-07-24 02:09:02.871255] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:48.067 [2024-07-24 02:09:02.871283] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 
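The get_subsystem_names and get_bdev_list checks traced above reduce to short RPC pipelines; the sketch below is reconstructed from the xtrace (the rpc_cmd wrapper around scripts/rpc.py, the /tmp/host.sock socket, the jq filters and the expected values are exactly as logged, while the function bodies are an approximation of host/discovery.sh rather than its verbatim source):

# Host-side helpers as they appear in the xtrace above (approximate reconstruction).
get_subsystem_names() {
    # names of NVMe controllers the host has attached via discovery
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}
get_bdev_list() {
    # block devices created from the attached namespaces
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
# waitforcondition (common/autotest_common.sh) retries the condition up to 10 times,
# sleeping 1 s between attempts, as the max=10 and sleep 1 lines in the trace show.
waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'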
00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.325 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:48.326 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:48.326 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:48.326 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:48.326 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.326 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.326 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.583 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:48.842 02:09:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.842 [2024-07-24 02:09:03.538550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:48.842 [2024-07-24 02:09:03.538869] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:48.842 [2024-07-24 02:09:03.538904] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.842 [2024-07-24 02:09:03.624605] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:48.842 02:09:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:48.842 02:09:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:49.100 [2024-07-24 02:09:03.884796] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:49.100 [2024-07-24 02:09:03.884822] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:49.100 [2024-07-24 02:09:03.884833] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:50.034 02:09:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.034 [2024-07-24 02:09:04.762806] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:50.034 [2024-07-24 02:09:04.762848] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:50.034 [2024-07-24 02:09:04.763764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.034 [2024-07-24 02:09:04.763799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.034 [2024-07-24 02:09:04.763828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.034 [2024-07-24 02:09:04.763843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.034 [2024-07-24 02:09:04.763859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.034 [2024-07-24 02:09:04.763873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.034 [2024-07-24 02:09:04.763889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.034 [2024-07-24 02:09:04.763903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.034 [2024-07-24 02:09:04.763918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:50.034 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:50.035 [2024-07-24 02:09:04.773750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.035 [2024-07-24 02:09:04.783799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 [2024-07-24 02:09:04.784065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.784099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.784118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.784144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 [2024-07-24 02:09:04.784181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.784201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.784218] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.784241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.035 [2024-07-24 02:09:04.793889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 [2024-07-24 02:09:04.794089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.794120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.794138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.794162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 [2024-07-24 02:09:04.794185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.794200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.794214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.794250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.035 [2024-07-24 02:09:04.803968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 [2024-07-24 02:09:04.804147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.804178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.804196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.804220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 [2024-07-24 02:09:04.804243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.804258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.804273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.804331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
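The connect() failures above (errno 111, i.e. ECONNREFUSED) are the expected fallout of the nvmf_subsystem_remove_listener call on port 4420: the host keeps trying to reset the controller against the listener that was just removed, until the discovery poller drops the 4420 path and only 4421 remains. The path check the test uses for this is the host/discovery.sh@63 pipeline visible in the trace; roughly:

# Sketch of the path check (pipeline taken from the xtrace; "nvme0" is the controller
# name derived from the -b nvme prefix passed to bdev_nvme_start_discovery).
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# "4420" before the shuffle, "4420 4421" while both listeners exist, "4421" afterwards.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'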
00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.035 [2024-07-24 02:09:04.814045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.035 [2024-07-24 02:09:04.814257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.814287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.814303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.814337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.035 [2024-07-24 02:09:04.814374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.814393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.814407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.814439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
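All of the reconnect churn in this part of the trace is driven from the target side. Condensed from the rpc_cmd calls logged in this section (arguments verbatim, in execution order), the target-side sequence for this test case is:

# Target-side setup and listener shuffle exercised by nvmf_host_discovery (condensed).
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0    # surfaces as nvme0n1 on the host
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1    # surfaces as nvme0n2 on the host
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420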
00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:50.035 [2024-07-24 02:09:04.824135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 [2024-07-24 02:09:04.824314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.824369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.824386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.824407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 [2024-07-24 02:09:04.824440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.824457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.824471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.824490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.035 [2024-07-24 02:09:04.834216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 [2024-07-24 02:09:04.834403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.834431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.834448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.834469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 [2024-07-24 02:09:04.834515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.834534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.834548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.834567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
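The is_notification_count_eq checks that recur through the trace count how many notify events the host app recorded since the last seen notify_id. A rough sketch, based on the rpc_cmd and jq calls shown (the variable bookkeeping is an approximation inferred from the logged notify_id progression 0 -> 1 -> 2 -> 4):

# Rough sketch of the notification counting used by the test.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
# Observed in this section: 1 after each namespace attach, 0 after the listener
# changes alone, and 2 once bdev_nvme_stop_discovery detaches the controller and
# both of its namespaces.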
00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.035 [2024-07-24 02:09:04.844293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:50.035 [2024-07-24 02:09:04.844481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.035 [2024-07-24 02:09:04.844509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e550 with addr=10.0.0.2, port=4420 00:30:50.035 [2024-07-24 02:09:04.844525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8e550 is same with the state(5) to be set 00:30:50.035 [2024-07-24 02:09:04.844546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8e550 (9): Bad file descriptor 00:30:50.035 [2024-07-24 02:09:04.844578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:50.035 [2024-07-24 02:09:04.844595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:50.035 [2024-07-24 02:09:04.844607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:50.035 [2024-07-24 02:09:04.844627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:50.035 [2024-07-24 02:09:04.849595] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:50.035 [2024-07-24 02:09:04.849654] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.035 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:50.036 02:09:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.036 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.294 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:50.294 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:50.294 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:50.294 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.294 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:50.294 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( 
max-- )) 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:50.295 02:09:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:50.295 02:09:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 [2024-07-24 02:09:06.129206] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:51.669 [2024-07-24 02:09:06.129240] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:51.669 [2024-07-24 02:09:06.129266] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.669 [2024-07-24 02:09:06.256716] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:51.669 [2024-07-24 02:09:06.364076] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:51.669 [2024-07-24 02:09:06.364130] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 request: 00:30:51.669 { 00:30:51.669 "name": "nvme", 00:30:51.669 "trtype": "tcp", 00:30:51.669 "traddr": "10.0.0.2", 00:30:51.669 "adrfam": "ipv4", 00:30:51.669 "trsvcid": "8009", 00:30:51.669 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:51.669 "wait_for_attach": true, 00:30:51.669 "method": "bdev_nvme_start_discovery", 00:30:51.669 "req_id": 1 00:30:51.669 } 00:30:51.669 Got JSON-RPC error response 00:30:51.669 response: 00:30:51.669 { 00:30:51.669 "code": -17, 00:30:51.669 "message": "File exists" 00:30:51.669 } 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 request: 00:30:51.669 { 00:30:51.669 "name": "nvme_second", 00:30:51.669 "trtype": "tcp", 00:30:51.669 "traddr": "10.0.0.2", 00:30:51.669 "adrfam": "ipv4", 00:30:51.669 "trsvcid": "8009", 00:30:51.669 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:51.669 "wait_for_attach": true, 00:30:51.669 "method": "bdev_nvme_start_discovery", 00:30:51.669 "req_id": 1 00:30:51.669 } 00:30:51.669 Got JSON-RPC error response 00:30:51.669 response: 00:30:51.669 { 00:30:51.669 "code": -17, 00:30:51.669 "message": "File exists" 00:30:51.669 } 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:51.669 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:51.927 02:09:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.861 [2024-07-24 02:09:07.584434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.861 [2024-07-24 02:09:07.584515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e070 with addr=10.0.0.2, port=8010 00:30:52.861 [2024-07-24 02:09:07.584547] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:52.861 [2024-07-24 02:09:07.584561] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:52.861 [2024-07-24 02:09:07.584574] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:53.795 [2024-07-24 02:09:08.586867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:53.795 [2024-07-24 02:09:08.586917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8e070 with addr=10.0.0.2, port=8010 00:30:53.795 [2024-07-24 02:09:08.586945] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:53.795 [2024-07-24 02:09:08.586960] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:53.795 [2024-07-24 02:09:08.586974] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:54.728 [2024-07-24 02:09:09.589047] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:54.728 request: 00:30:54.728 { 00:30:54.728 "name": "nvme_second", 00:30:54.728 "trtype": "tcp", 00:30:54.728 "traddr": "10.0.0.2", 00:30:54.728 "adrfam": "ipv4", 00:30:54.728 "trsvcid": "8010", 00:30:54.728 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:54.728 "wait_for_attach": false, 00:30:54.728 "attach_timeout_ms": 3000, 00:30:54.728 "method": "bdev_nvme_start_discovery", 00:30:54.728 "req_id": 1 00:30:54.728 } 00:30:54.728 Got JSON-RPC error response 00:30:54.728 response: 00:30:54.728 { 00:30:54.728 "code": -110, 00:30:54.728 "message": "Connection timed out" 00:30:54.728 } 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:54.728 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1546876 00:30:54.986 02:09:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:54.986 rmmod nvme_tcp 00:30:54.986 rmmod nvme_fabrics 00:30:54.986 rmmod nvme_keyring 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1546723 ']' 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1546723 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1546723 ']' 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1546723 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1546723 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1546723' 00:30:54.986 killing process with pid 1546723 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1546723 00:30:54.986 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1546723 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.245 02:09:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.145 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:57.145 00:30:57.145 real 0m13.162s 00:30:57.145 user 0m19.162s 00:30:57.145 sys 0m2.750s 00:30:57.145 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:57.145 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.145 ************************************ 00:30:57.145 END TEST nvmf_host_discovery 00:30:57.145 ************************************ 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.403 ************************************ 00:30:57.403 START TEST nvmf_host_multipath_status 00:30:57.403 ************************************ 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:57.403 * Looking for test storage... 00:30:57.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.403 
02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.403 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:57.404 02:09:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.303 02:09:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:59.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:59.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.303 02:09:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:59.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.303 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:59.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:59.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:30:59.304 00:30:59.304 --- 10.0.0.2 ping statistics --- 00:30:59.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.304 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:30:59.304 00:30:59.304 --- 10.0.0.1 ping statistics --- 00:30:59.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.304 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1550449 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1550449 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1550449 ']' 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:59.304 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:59.563 [2024-07-24 02:09:14.210745] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
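For reference, the physical-NIC environment that nvmftestinit brings up above reduces to roughly the following sequence; the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses are the values observed in this run, the workspace paths are abbreviated, and this is a condensed sketch of what the harness does rather than its exact code:

  # move one port of the NIC pair into a private namespace to act as the NVMe/TCP target
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # the target application is then started inside the namespace on cores 0-1 (mask 0x3):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &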
00:30:59.563 [2024-07-24 02:09:14.210831] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.563 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.563 [2024-07-24 02:09:14.285066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:59.563 [2024-07-24 02:09:14.374739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.563 [2024-07-24 02:09:14.374802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.563 [2024-07-24 02:09:14.374818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.563 [2024-07-24 02:09:14.374831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.563 [2024-07-24 02:09:14.374842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.563 [2024-07-24 02:09:14.374930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.563 [2024-07-24 02:09:14.374938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1550449 00:30:59.821 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:00.079 [2024-07-24 02:09:14.771782] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.079 02:09:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:00.337 Malloc0 00:31:00.337 02:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:00.594 02:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:00.852 02:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.109 [2024-07-24 02:09:15.853501] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.110 02:09:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.368 [2024-07-24 02:09:16.094096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1550624 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1550624 /var/tmp/bdevperf.sock 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1550624 ']' 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:01.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
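The multipath_status run that follows exposes one malloc namespace through two listeners on the same subsystem and then attaches both paths from the bdevperf host. Condensed to the RPCs exercised in this log (rpc.py stands in for the full scripts/rpc.py path, the -s /var/tmp/bdevperf.sock calls go to the bdevperf instance started above; a sketch, not the verbatim test script):

  # target side
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side: attach the same subsystem via both listeners; the second attach uses
  # -x multipath so ports 4420 and 4421 become two I/O paths of the same Nvme0n1 bdev
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  # each check_status round then flips the ANA state of a listener and inspects the paths, e.g.
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'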
00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.368 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:01.626 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.626 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:01.626 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:01.883 02:09:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:02.448 Nvme0n1 00:31:02.448 02:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:02.706 Nvme0n1 00:31:02.706 02:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:02.706 02:09:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:05.239 02:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:05.239 02:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:05.239 02:09:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:05.240 02:09:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:06.215 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:06.215 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:06.215 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.215 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:06.474 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.474 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:06.474 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.474 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:06.732 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:06.732 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:06.732 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.732 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:06.992 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:06.992 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:06.992 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:06.992 02:09:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:07.250 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.250 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:07.250 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.250 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:07.510 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.510 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:07.510 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:07.510 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:07.770 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:07.770 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:07.770 02:09:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:08.027 02:09:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:08.283 02:09:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.653 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:09.911 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:09.911 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:09.911 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:09.911 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:10.168 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.168 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:10.168 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.168 02:09:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:10.425 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.425 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:10.425 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.425 02:09:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:10.682 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.682 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:10.682 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:10.682 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:10.939 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:10.939 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:10.939 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:11.196 02:09:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:11.453 02:09:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:12.386 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:12.386 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:12.386 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.386 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:12.644 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.644 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:12.644 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.644 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:12.902 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:12.902 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:12.902 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.902 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:13.160 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.160 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:13.160 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.160 02:09:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:13.418 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.418 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:13.418 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.418 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:13.675 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.675 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:13.675 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.675 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:13.933 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.933 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:13.933 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:14.191 02:09:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:14.449 02:09:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:15.383 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:15.383 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:15.383 02:09:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.383 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:15.641 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:15.641 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:15.641 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.641 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:15.899 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.899 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:15.899 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.899 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:16.157 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.157 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:16.157 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.157 02:09:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:16.415 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.415 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:16.415 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.415 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:16.673 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.673 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:16.673 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.673 02:09:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:16.931 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:16.931 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:16.931 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:17.189 02:09:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:17.446 02:09:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:18.379 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:18.379 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:18.379 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.379 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:18.637 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.637 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:18.637 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.637 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:18.895 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:18.895 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:18.895 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:18.895 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:19.153 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.153 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:19.153 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.153 02:09:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:19.411 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.411 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:19.411 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.411 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:19.669 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.669 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:19.669 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.669 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:19.926 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.926 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:19.926 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:20.184 02:09:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:20.441 02:09:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:21.375 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:21.375 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:21.375 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.375 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:21.633 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.633 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:21.633 02:09:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.633 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:21.890 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.890 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:21.890 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.890 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.148 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.148 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:22.148 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.148 02:09:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:22.406 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.406 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:22.406 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.406 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:22.697 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.697 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:22.697 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.697 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:22.959 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.959 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:23.217 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:31:23.217 02:09:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:23.475 02:09:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:23.733 02:09:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:24.668 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:24.668 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:24.668 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.668 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.926 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.926 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:24.926 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.926 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.184 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.184 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.184 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.184 02:09:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.442 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.442 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:25.442 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.442 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:25.700 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.700 02:09:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:25.700 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.700 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:25.958 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.958 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:25.958 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.958 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:26.216 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.216 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:26.216 02:09:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:26.474 02:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:26.732 02:09:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:27.665 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:27.665 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:27.665 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.665 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:27.923 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.923 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:27.923 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.923 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.181 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.181 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.181 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.181 02:09:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:28.438 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.438 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:28.438 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.439 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:28.695 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.695 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:28.695 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.695 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:28.953 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.953 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:28.953 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.953 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.211 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.211 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:29.211 02:09:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:29.469 02:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:29.727 02:09:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
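
The trace above keeps re-entering three small helpers from host/multipath_status.sh: port_status (sh@64) queries bdevperf's I/O paths over the RPC socket and compares one field of the path on a given port, check_status (sh@68-73) applies six such expectations in a row, and set_ANA_state (sh@59-60) flips the ANA state of the two listeners. A minimal sketch reconstructed from the traced commands follows; the commands, jq filter, NQN, address and ports are taken from the log, while the variable names (rpc_py, bdevperf_rpc_sock, NVMF_PORT, NVMF_SECOND_PORT) and the exact argument plumbing are assumptions, not the verbatim test script.

#!/usr/bin/env bash
# Sketch of the helpers exercised in the trace above -- reconstructed, not verbatim.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
NVMF_PORT=4420
NVMF_SECOND_PORT=4421

# port_status <trsvcid> <field> <expected>: read bdev_nvme_get_io_paths from the
# bdevperf app and check one boolean (current/connected/accessible) of that port's path.
port_status() {
    local port=$1 field=$2 expected=$3 status
    status=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ "$status" == "$expected" ]]
}

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected> \
#              <4420 accessible> <4421 accessible>  -- the order seen at sh@68-73.
check_status() {
    port_status "$NVMF_PORT" current "$1"
    port_status "$NVMF_SECOND_PORT" current "$2"
    port_status "$NVMF_PORT" connected "$3"
    port_status "$NVMF_SECOND_PORT" connected "$4"
    port_status "$NVMF_PORT" accessible "$5"
    port_status "$NVMF_SECOND_PORT" accessible "$6"
}

# set_ANA_state <state for 4420> <state for 4421>
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$NVMF_SECOND_PORT" -n "$2"
}
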
00:31:30.661 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:30.661 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:30.661 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.661 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:30.919 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.919 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:30.919 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:30.919 02:09:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:31.177 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.177 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:31.177 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.177 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.435 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.435 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.435 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.435 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.692 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.692 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:31.692 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.692 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:31.949 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.949 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:31.949 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.949 02:09:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.207 02:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.207 02:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:32.207 02:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:32.465 02:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:32.723 02:09:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:33.656 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:33.656 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.656 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.656 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.914 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.914 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:33.914 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.914 02:09:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.172 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.172 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.172 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.172 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.429 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:31:34.429 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.429 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.429 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.686 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.686 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.686 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.686 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.944 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.944 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:34.944 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.944 02:09:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1550624 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1550624 ']' 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1550624 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1550624 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1550624' 00:31:35.201 killing process with pid 1550624 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1550624 00:31:35.201 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1550624 00:31:35.483 Connection closed with partial response: 00:31:35.483 00:31:35.483 00:31:35.483 
02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1550624 00:31:35.483 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:35.483 [2024-07-24 02:09:16.157627] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:31:35.483 [2024-07-24 02:09:16.157722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550624 ] 00:31:35.483 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.483 [2024-07-24 02:09:16.221943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.483 [2024-07-24 02:09:16.306788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:35.483 Running I/O for 90 seconds... 00:31:35.483 [2024-07-24 02:09:31.976999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.483 [2024-07-24 02:09:31.977051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.483 [2024-07-24 02:09:31.977104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.483 [2024-07-24 02:09:31.977123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.483 [2024-07-24 02:09:31.977147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.483 [2024-07-24 02:09:31.977164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.483 [2024-07-24 02:09:31.977186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.483 [2024-07-24 02:09:31.977203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
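
The capture being dumped here is bdevperf's own log: it is the secondary SPDK application that the -s /var/tmp/bdevperf.sock RPCs above were aimed at. Judging from the EAL parameters (-c 0x4, file-prefix spdk_pid1550624) and the "Running I/O for 90 seconds" banner, it was started along the lines below; only the core mask, RPC socket and runtime come from the log, while the queue depth, I/O size and workload are assumed typical values for this test, not captured output.

# Hypothetical bdevperf invocation; -m 0x4, -r and -t 90 match the log,
# -q/-o/-w are assumptions.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 90 &> try.txt
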
00:31:35.484 [2024-07-24 02:09:31.977360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.977982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.977998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.978025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.978042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.978065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.978081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.978103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.978119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.484 [2024-07-24 02:09:31.978141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.484 [2024-07-24 02:09:31.978157] nvme_qpair.c: 
00:31:35.484-00:31:35.490 [2024-07-24 02:09:31.978-02:09:31.988] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion notice pairs on qid:1 nsid:1 (WRITE lba:45696-46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba:45560-45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the same notice pattern repeats across sequential cid/sqhd values, with the final READ record cut off in the captured output.
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.490 [2024-07-24 02:09:31.988223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.490 [2024-07-24 02:09:31.988259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.490 [2024-07-24 02:09:31.988309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.490 [2024-07-24 02:09:31.988358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.490 [2024-07-24 02:09:31.988396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.490 [2024-07-24 02:09:31.988434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.988965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.988981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:31:35.490 [2024-07-24 02:09:31.989002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.989021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.989043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.989059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.490 [2024-07-24 02:09:31.989080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.490 [2024-07-24 02:09:31.989095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.989399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.989415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 
[2024-07-24 02:09:31.990921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.990980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.990996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.491 [2024-07-24 02:09:31.991297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.491 [2024-07-24 02:09:31.991350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45904 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.991974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.991996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 
02:09:31.992145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.992494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.992510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.492 [2024-07-24 02:09:31.993249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.492 [2024-07-24 02:09:31.993453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.492 [2024-07-24 02:09:31.993475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.993976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.993991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.493 [2024-07-24 02:09:31.994012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.493 [2024-07-24 02:09:31.994027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.994334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.494 [2024-07-24 02:09:31.994381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.994956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.994987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.494 [2024-07-24 02:09:31.995033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.494 [2024-07-24 02:09:31.995308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.494 [2024-07-24 02:09:31.995347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:31:35.495 [2024-07-24 02:09:31.995666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.995968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.995983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.996003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.996019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.996804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.996828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.996856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.996873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.996896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.996912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.996933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.996950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.996971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.996987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997193] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.495 [2024-07-24 02:09:31.997573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.495 [2024-07-24 02:09:31.997594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:35.495 [2024-07-24 02:09:31.997610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.997976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.997997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 
lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:35.496 
[2024-07-24 02:09:31.998812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.998968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.998988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.999002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.999028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.496 [2024-07-24 02:09:31.999044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.496 [2024-07-24 02:09:31.999065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:31.999080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:31.999767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:31.999816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:31.999856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.497 [2024-07-24 02:09:31.999894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:31.999932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:31.999969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:31.999992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000662] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.000965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.000981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.001002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.497 [2024-07-24 02:09:32.001018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.001039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.497 [2024-07-24 02:09:32.001055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.001077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.497 [2024-07-24 02:09:32.001092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.001114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.497 [2024-07-24 02:09:32.001129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.001150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.497 [2024-07-24 02:09:32.001167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.497 [2024-07-24 02:09:32.001188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.497 [2024-07-24 02:09:32.001204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 
nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.498 [2024-07-24 02:09:32.001670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001870] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.001972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.001992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:31:35.498 [2024-07-24 02:09:32.002239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.498 [2024-07-24 02:09:32.002461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.498 [2024-07-24 02:09:32.002477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.002499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.002514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.002536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.002552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.002574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.002590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.003970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.003986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:35.499 [2024-07-24 02:09:32.004218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.499 [2024-07-24 02:09:32.004754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.499 [2024-07-24 02:09:32.004769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.004789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.004804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.004825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.004840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.004860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.004875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.004895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.004910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.004931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.004961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.004985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005022] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:31:35.500 [2024-07-24 02:09:32.005435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.005695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.005710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.500 [2024-07-24 02:09:32.006573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:35.500 [2024-07-24 02:09:32.006888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.500 [2024-07-24 02:09:32.006903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.006923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.006938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.006958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.006973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:35.501 [2024-07-24 02:09:32.007271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.007650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.007972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.007992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.501 [2024-07-24 02:09:32.008286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.501 [2024-07-24 02:09:32.008332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.501 [2024-07-24 02:09:32.008356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:31:35.502 [2024-07-24 02:09:32.008472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.008974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.008996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.009012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.009032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.009048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.009069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.009085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.009105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.009120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.009141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.009156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.009933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.009957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.009985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:35.502 [2024-07-24 02:09:32.010430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.502 [2024-07-24 02:09:32.010554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.502 [2024-07-24 02:09:32.010576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.010978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.010998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:31:35.503 [2024-07-24 02:09:32.011741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.011756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.011777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.018862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.018922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.018941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.018964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.018980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.019001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.019017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.019038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.019054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.503 [2024-07-24 02:09:32.019076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.503 [2024-07-24 02:09:32.019091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.019447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.019463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.504 [2024-07-24 02:09:32.020411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:35.504 [2024-07-24 02:09:32.020759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.020967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.020982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46288 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.504 [2024-07-24 02:09:32.021229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.504 [2024-07-24 02:09:32.021250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.021265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.021324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.021371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.021409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.021447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.021484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 
02:09:32.021935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.021973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.021989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.022027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.022065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.022103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.505 [2024-07-24 02:09:32.022142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.505 [2024-07-24 02:09:32.022782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.505 [2024-07-24 02:09:32.022802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.022817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.022838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.022857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.022879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.022894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.022915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.022930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.022951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.022966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.022987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 
02:09:32.023842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.023978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.023994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45744 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:120 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.024976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.024998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.506 [2024-07-24 02:09:32.025014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.506 [2024-07-24 02:09:32.025035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:31:35.507 [2024-07-24 02:09:32.025494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.025976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.025991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.026974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.026995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.507 [2024-07-24 02:09:32.027012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.027050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.027065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.027086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.027102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.027123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.027139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.027160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.507 [2024-07-24 02:09:32.027175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.507 [2024-07-24 02:09:32.027212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:46184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:35.508 [2024-07-24 02:09:32.027383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:46224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:46248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:46280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 
lba:46288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.027970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.027990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.028005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:46344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.028040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.028075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.508 [2024-07-24 02:09:32.028110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.508 [2024-07-24 02:09:32.028462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.508 [2024-07-24 02:09:32.028483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.509 [2024-07-24 02:09:32.028498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:31:35.509 [2024-07-24 02:09:32.028536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.509 [2024-07-24 02:09:32.028552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:35.509 [2024-07-24 02:09:32.028768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.509 [2024-07-24 02:09:32.028784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0
[several hundred additional nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: READ and WRITE commands on qid:1, logged at 02:09:32 and 02:09:47, all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:31:35.515 [2024-07-24 02:09:47.505210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.515 [2024-07-24 02:09:47.505226] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:35.515 [2024-07-24 02:09:47.505620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.505734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.505966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.505990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.506007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.506559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.506605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.506644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.506968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.506992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.507008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.507029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.515 [2024-07-24 02:09:47.507045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.507066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.507082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.507103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.515 [2024-07-24 02:09:47.507119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.515 [2024-07-24 02:09:47.507140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.507177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.507214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.507250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:31:35.516 [2024-07-24 02:09:47.507286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.507351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.507390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.507406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.509921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.509982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.509999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.510021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.510037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.510059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.510074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.510096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.516 [2024-07-24 02:09:47.510112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.516 [2024-07-24 02:09:47.510133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.516 [2024-07-24 02:09:47.510154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.517 [2024-07-24 02:09:47.510193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.510936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.510972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.510993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.511009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.511030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.511046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.511068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.511084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.513816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.513842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.513886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.513903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.513943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.513959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.513982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.517 [2024-07-24 02:09:47.513998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.514020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.514036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.514058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.514074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:31:35.517 [2024-07-24 02:09:47.514095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.514111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.514147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.514164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.514185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.517 [2024-07-24 02:09:47.514201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.517 [2024-07-24 02:09:47.514222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.514917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.514977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.514993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.515015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.515031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.515780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.515804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.515831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.515849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.515871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.515888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.515910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.515926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.515948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.518 [2024-07-24 02:09:47.515979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.516002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.516017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.518 [2024-07-24 02:09:47.516054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.518 [2024-07-24 02:09:47.516069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.519 [2024-07-24 02:09:47.516090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.519 [2024-07-24 02:09:47.516121 .. 02:09:47.533585] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated dump of outstanding qid:1 I/O
00:31:35.519 Several hundred READ/WRITE command prints (sqid:1, nsid:1, cid 0-126, lba ~17720-19968, len:8; reads as SGL TRANSPORT DATA BLOCK, writes as SGL DATA BLOCK OFFSET 0x0 len:0x1000), each paired with a completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0, sqhd advancing 0x0061 through 0x007f and wrapping to 0x002f, p:0 m:0 dnr:0
00:31:35.525 [2024-07-24 02:09:47.535083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.535108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.535155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.535362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.535400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.535497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.535514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.525 [2024-07-24 02:09:47.537740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.525 [2024-07-24 02:09:47.537811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.525 [2024-07-24 02:09:47.537954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.525 [2024-07-24 02:09:47.537974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.537989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.538060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.538095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.538130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 
nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.538170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.538205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.538241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.538277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.538337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.538378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.538400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.538416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.540226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.540509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.540965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.540988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.541004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.541042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 
00:31:35.526 [2024-07-24 02:09:47.541063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.541080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.541264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.526 [2024-07-24 02:09:47.541404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:35.526 [2024-07-24 02:09:47.541577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.526 [2024-07-24 02:09:47.541593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.541631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.541647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.541668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.541700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.541723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.541739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.543293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.543510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.543661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.543699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.543736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.543773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:35.527 [2024-07-24 02:09:47.543891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.543968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.543983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 
nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.527 [2024-07-24 02:09:47.544660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.527 [2024-07-24 02:09:47.544695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:35.527 [2024-07-24 02:09:47.544715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.544730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.544750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.544766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.544786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.544801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.544822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.544841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.546758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.546782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.546824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.546842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.546864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.546880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.546901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.546917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.546938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.546954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:31:35.528 [2024-07-24 02:09:47.546975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.546990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.547011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.547026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.547047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.547062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.547084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.547099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.548251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.548312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.548695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.548782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.548836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.548977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.548993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.549031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.549085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.549122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.549158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.549194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.528 [2024-07-24 02:09:47.549246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.528 [2024-07-24 02:09:47.549297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:35.528 [2024-07-24 02:09:47.549340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:35.528 [2024-07-24 02:09:47.549358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:86 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.549884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.549983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.549999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.550035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.550070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.550106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.550142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:35.529 [2024-07-24 02:09:47.550178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.550213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.550248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.550269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.550284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.553701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.553741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.553796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.553824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.553864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.553881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.553903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.529 [2024-07-24 02:09:47.553919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:35.529 [2024-07-24 02:09:47.553940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:35.530 [2024-07-24 02:09:47.553956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
00:31:35.530 [2024-07-24 02:09:47.553977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.530 [2024-07-24 02:09:47.553993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:35.530 [2024-07-24 02:09:47.554015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.530 [2024-07-24 02:09:47.554030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.530 [2024-07-24 02:09:47.554052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.530 [2024-07-24 02:09:47.554068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:35.530 [2024-07-24 02:09:47.554106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:35.530 [2024-07-24 02:09:47.554122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:31:35.530 [2024-07-24 02:09:47.554143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.530 [2024-07-24 02:09:47.554159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:31:35.530 [2024-07-24 02:09:47.554180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.530 [2024-07-24 02:09:47.554196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:31:35.530 [2024-07-24 02:09:47.554217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.530 [2024-07-24 02:09:47.554247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:31:35.530 Received shutdown signal, test time was about 32.351910 seconds
00:31:35.530
00:31:35.530 Latency(us)
00:31:35.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:35.530 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:35.530 Verification LBA range: start 0x0 length 0x4000
00:31:35.530 Nvme0n1 : 32.35 7815.15 30.53 0.00 0.00 16352.38 509.72 4076242.11
00:31:35.530 ===================================================================================================================
00:31:35.530 Total : 7815.15 30.53 0.00 0.00 16352.38 509.72 4076242.11
00:31:35.530 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:35.788 02:09:50
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:35.788 rmmod nvme_tcp 00:31:35.788 rmmod nvme_fabrics 00:31:35.788 rmmod nvme_keyring 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1550449 ']' 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1550449 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1550449 ']' 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1550449 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1550449 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1550449' 00:31:35.788 killing process with pid 1550449 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1550449 00:31:35.788 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1550449 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:36.047 02:09:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.047 02:09:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:38.585 00:31:38.585 real 0m40.821s 00:31:38.585 user 2m3.428s 00:31:38.585 sys 0m10.408s 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:38.585 ************************************ 00:31:38.585 END TEST nvmf_host_multipath_status 00:31:38.585 ************************************ 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:38.585 ************************************ 00:31:38.585 START TEST nvmf_discovery_remove_ifc 00:31:38.585 ************************************ 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:38.585 * Looking for test storage... 
00:31:38.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.585 02:09:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:38.585 02:09:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.489 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:40.490 02:09:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:40.490 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:40.490 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:40.490 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.490 
02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:40.490 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.490 02:09:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.490 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:40.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:40.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:31:40.490 00:31:40.490 --- 10.0.0.2 ping statistics --- 00:31:40.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.490 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:31:40.490 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:31:40.490 00:31:40.490 --- 10.0.0.1 ping statistics --- 00:31:40.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.491 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1556794 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1556794 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1556794 ']' 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
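Note: the trace above amounts to a small two-sided NVMe/TCP topology on one machine: one port of the NIC (cvl_0_0) is moved into a private network namespace to act as the target, the other port (cvl_0_1) stays in the default namespace as the initiator, and reachability is checked with one ping in each direction before the target application is started inside the namespace. Condensed for readability (assuming the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing shown in this log; this is a sketch, not a verbatim copy of nvmf/common.sh), the equivalent commands are roughly:

# move the target-side port into its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both sides: initiator 10.0.0.1, target 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in, then sanity-check reachability in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the NVMe-oF target inside the namespace, as nvmfappstart is traced doing above
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2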
00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.491 [2024-07-24 02:09:55.079426] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:31:40.491 [2024-07-24 02:09:55.079511] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.491 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.491 [2024-07-24 02:09:55.146486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.491 [2024-07-24 02:09:55.237408] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.491 [2024-07-24 02:09:55.237466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.491 [2024-07-24 02:09:55.237488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.491 [2024-07-24 02:09:55.237500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.491 [2024-07-24 02:09:55.237511] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.491 [2024-07-24 02:09:55.237543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.491 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.491 [2024-07-24 02:09:55.381529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.749 [2024-07-24 02:09:55.389754] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:40.749 null0 00:31:40.749 [2024-07-24 02:09:55.421683] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1556825 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 
00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1556825 /tmp/host.sock 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1556825 ']' 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:40.749 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:40.749 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:40.749 [2024-07-24 02:09:55.484919] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:31:40.749 [2024-07-24 02:09:55.484994] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1556825 ] 00:31:40.749 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.749 [2024-07-24 02:09:55.545702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.749 [2024-07-24 02:09:55.635771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:41.008 02:09:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.008 02:09:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.381 [2024-07-24 02:09:56.850501] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:42.381 [2024-07-24 02:09:56.850537] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:42.381 [2024-07-24 02:09:56.850562] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:42.381 [2024-07-24 02:09:56.937856] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:42.381 [2024-07-24 02:09:57.000131] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:42.381 [2024-07-24 02:09:57.000190] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:42.381 [2024-07-24 02:09:57.000226] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:42.381 [2024-07-24 02:09:57.000249] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:42.381 [2024-07-24 02:09:57.000282] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.381 [2024-07-24 02:09:57.007584] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24ee340 was disconnected and freed. delete nvme_qpair. 
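Note: rpc_cmd in the trace above is the harness wrapper around scripts/rpc.py pointed at the host application's socket (/tmp/host.sock). Spelled out as plain commands, the discovery attach and the wait_for_bdev polling that follows look approximately like the sketch below; the exact helper logic lives in discovery_remove_ifc.sh and common.sh, and the loop shape here is an assumption inferred from the repeated bdev_get_bdevs / sleep 1 trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# host-side bdev_nvme options as traced (-e 1), then finish framework init
$RPC -s /tmp/host.sock bdev_nvme_set_options -e 1
$RPC -s /tmp/host.sock framework_start_init
# attach through the discovery service on 10.0.0.2:8009; the short loss/reconnect
# timeouts are what let the later interface removal take effect quickly
$RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach
# wait_for_bdev nvme0n1: poll the bdev list once per second until it matches
while [[ "$($RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != "nvme0n1" ]]; do
    sleep 1
done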
00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:42.381 02:09:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:43.315 02:09:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:44.688 02:09:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:44.688 02:09:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:45.622 02:10:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:46.555 02:10:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:47.487 02:10:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:47.745 [2024-07-24 02:10:02.441669] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:47.745 [2024-07-24 02:10:02.441743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.745 [2024-07-24 02:10:02.441777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.745 [2024-07-24 02:10:02.441798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.745 [2024-07-24 02:10:02.441813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.745 [2024-07-24 02:10:02.441829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.745 [2024-07-24 02:10:02.441844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.745 [2024-07-24 02:10:02.441858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.745 [2024-07-24 02:10:02.441873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.745 [2024-07-24 02:10:02.441889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:47.745 [2024-07-24 02:10:02.441904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:47.745 [2024-07-24 02:10:02.441920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4b60 is same with the state(5) to be set 00:31:47.745 [2024-07-24 02:10:02.451687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4b60 (9): Bad file descriptor 00:31:47.745 [2024-07-24 02:10:02.461736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.676 [2024-07-24 02:10:03.485347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:48.676 [2024-07-24 02:10:03.485412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b4b60 with addr=10.0.0.2, port=4420 00:31:48.676 [2024-07-24 02:10:03.485434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b4b60 is same with the state(5) to be set 00:31:48.676 [2024-07-24 02:10:03.485464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b4b60 (9): Bad file descriptor 00:31:48.676 [2024-07-24 02:10:03.485862] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:48.676 [2024-07-24 02:10:03.485901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:48.676 [2024-07-24 02:10:03.485920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:48.676 [2024-07-24 02:10:03.485938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:48.676 [2024-07-24 02:10:03.485962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:48.676 [2024-07-24 02:10:03.485982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.676 02:10:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:49.608 [2024-07-24 02:10:04.488473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:49.608 [2024-07-24 02:10:04.488500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:49.608 [2024-07-24 02:10:04.488514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:49.608 [2024-07-24 02:10:04.488526] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:49.608 [2024-07-24 02:10:04.488544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:49.608 [2024-07-24 02:10:04.488581] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:49.608 [2024-07-24 02:10:04.488623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.608 [2024-07-24 02:10:04.488642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.608 [2024-07-24 02:10:04.488673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.608 [2024-07-24 02:10:04.488689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.608 [2024-07-24 02:10:04.488705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.608 [2024-07-24 02:10:04.488720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.608 [2024-07-24 02:10:04.488738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.608 [2024-07-24 02:10:04.488753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.608 [2024-07-24 02:10:04.488769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:49.608 [2024-07-24 02:10:04.488783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:49.608 [2024-07-24 02:10:04.488797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:31:49.608 [2024-07-24 02:10:04.489143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b3f80 (9): Bad file descriptor 00:31:49.608 [2024-07-24 02:10:04.490165] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:49.608 [2024-07-24 02:10:04.490189] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:49.866 02:10:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:50.799 02:10:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:50.799 02:10:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:51.732 [2024-07-24 02:10:06.546505] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:51.732 [2024-07-24 02:10:06.546536] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:51.732 [2024-07-24 02:10:06.546559] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:51.990 [2024-07-24 02:10:06.633886] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.990 [2024-07-24 02:10:06.695621] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:51.990 [2024-07-24 02:10:06.695676] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:51.990 [2024-07-24 02:10:06.695712] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:51.990 [2024-07-24 02:10:06.695738] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:51.990 [2024-07-24 02:10:06.695754] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:51.990 [2024-07-24 02:10:06.703378] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24cc4f0 was disconnected and freed. delete nvme_qpair. 
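The interface bounce exercised by this test, traced between the two discovery attachments above, amounts to removing and restoring the target-side address inside the cvl_0_0_ns_spdk namespace. A rough sketch of that cycle, using only commands that appear in the trace:

  NS=cvl_0_0_ns_spdk
  IF=cvl_0_0

  # Drop the target address and take the link down; the host side loses
  # nvme0n1 and controller resets start failing with errno 110
  # (connection timed out), as seen in the log.
  ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$IF"
  ip netns exec "$NS" ip link set "$IF" down

  # Restore the address and the link; discovery re-attaches the subsystem
  # and the namespace comes back as nvme1n1.
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF"
  ip netns exec "$NS" ip link set "$IF" up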
00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:51.990 02:10:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1556825 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1556825 ']' 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1556825 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556825 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:52.923 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:52.924 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556825' 00:31:52.924 killing process with pid 1556825 00:31:52.924 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1556825 00:31:52.924 02:10:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1556825 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:53.182 rmmod nvme_tcp 00:31:53.182 rmmod nvme_fabrics 00:31:53.182 rmmod nvme_keyring 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1556794 ']' 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1556794 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1556794 ']' 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1556794 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:53.182 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556794 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556794' 00:31:53.439 killing process with pid 1556794 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1556794 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1556794 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.439 02:10:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.973 00:31:55.973 real 0m17.404s 00:31:55.973 user 0m25.327s 00:31:55.973 sys 0m2.951s 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.973 
************************************ 00:31:55.973 END TEST nvmf_discovery_remove_ifc 00:31:55.973 ************************************ 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.973 ************************************ 00:31:55.973 START TEST nvmf_identify_kernel_target 00:31:55.973 ************************************ 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:55.973 * Looking for test storage... 00:31:55.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.973 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.974 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:55.974 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.974 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.974 02:10:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:57.873 
02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:57.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
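The enumeration traced here maps supported NIC PCI IDs (two Intel E810 functions, 0x8086:0x159b) to their kernel net device names through sysfs, which is how cvl_0_0 and cvl_0_1 are found below. A condensed sketch of that lookup, assuming the same sysfs layout the trace relies on:

  # A PCI function's net devices are the directory entries under its
  # sysfs "net" node, e.g. cvl_0_0 for 0000:0a:00.0.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: $(basename "$dev")"
      done
  done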
00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:57.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:57.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:31:57.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:57.873 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:57.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:57.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:31:57.874 00:31:57.874 --- 10.0.0.2 ping statistics --- 00:31:57.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.874 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:57.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:57.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:31:57.874 00:31:57.874 --- 10.0.0.1 ping statistics --- 00:31:57.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:57.874 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:57.874 02:10:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:58.807 Waiting for block devices as requested 00:31:59.065 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:59.065 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:59.323 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:59.323 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:59.323 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:59.323 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:59.581 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:59.581 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:59.581 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:59.581 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:59.839 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:59.839 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:59.839 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:59.839 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:00.116 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:00.116 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:00.116 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:00.381 No valid GPT data, bailing 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:00.381 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:00.381 00:32:00.381 Discovery Log Number of Records 2, Generation counter 2 00:32:00.381 =====Discovery Log Entry 0====== 00:32:00.381 trtype: tcp 00:32:00.381 adrfam: ipv4 00:32:00.381 subtype: current discovery subsystem 00:32:00.381 treq: not specified, sq flow control disable supported 00:32:00.381 portid: 1 00:32:00.381 trsvcid: 4420 00:32:00.381 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:00.381 traddr: 10.0.0.1 00:32:00.381 eflags: none 00:32:00.381 sectype: none 00:32:00.381 =====Discovery Log Entry 1====== 00:32:00.381 trtype: tcp 00:32:00.381 adrfam: ipv4 00:32:00.381 subtype: nvme subsystem 00:32:00.381 treq: not specified, sq flow control disable supported 00:32:00.381 portid: 1 00:32:00.381 trsvcid: 4420 00:32:00.381 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:00.381 traddr: 10.0.0.1 00:32:00.381 eflags: none 00:32:00.381 sectype: none 00:32:00.381 02:10:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:00.381 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:00.381 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.642 ===================================================== 00:32:00.642 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:00.642 ===================================================== 00:32:00.642 Controller Capabilities/Features 00:32:00.642 ================================ 00:32:00.642 Vendor ID: 0000 00:32:00.642 Subsystem Vendor ID: 0000 00:32:00.642 Serial Number: 5fda8f699849b7014422 00:32:00.642 Model Number: Linux 00:32:00.642 Firmware Version: 6.7.0-68 00:32:00.642 Recommended Arb Burst: 0 00:32:00.642 IEEE OUI Identifier: 00 00 00 00:32:00.642 Multi-path I/O 00:32:00.642 May have multiple subsystem ports: No 00:32:00.642 May have multiple controllers: No 00:32:00.642 Associated with SR-IOV VF: No 00:32:00.642 Max Data Transfer Size: Unlimited 00:32:00.642 Max Number of Namespaces: 0 00:32:00.642 Max Number of I/O Queues: 1024 00:32:00.642 NVMe Specification Version (VS): 1.3 00:32:00.642 NVMe Specification Version (Identify): 1.3 00:32:00.642 Maximum Queue Entries: 1024 00:32:00.642 Contiguous Queues Required: No 00:32:00.642 Arbitration Mechanisms Supported 00:32:00.642 Weighted Round Robin: Not Supported 00:32:00.642 Vendor Specific: Not Supported 00:32:00.642 Reset Timeout: 7500 ms 00:32:00.642 Doorbell Stride: 4 bytes 00:32:00.642 NVM Subsystem Reset: Not Supported 00:32:00.642 Command Sets Supported 00:32:00.642 NVM Command Set: Supported 00:32:00.642 Boot Partition: Not Supported 00:32:00.642 Memory Page Size Minimum: 4096 bytes 00:32:00.642 Memory Page Size Maximum: 4096 bytes 00:32:00.642 Persistent Memory Region: Not Supported 00:32:00.642 Optional Asynchronous Events Supported 00:32:00.642 Namespace Attribute Notices: Not Supported 00:32:00.642 Firmware Activation Notices: Not Supported 00:32:00.642 ANA Change Notices: Not Supported 00:32:00.642 PLE Aggregate Log Change Notices: Not Supported 00:32:00.642 LBA Status Info Alert Notices: Not Supported 00:32:00.642 EGE Aggregate Log Change Notices: Not Supported 00:32:00.642 Normal NVM Subsystem Shutdown event: Not Supported 00:32:00.642 Zone Descriptor Change Notices: Not Supported 00:32:00.642 Discovery Log Change Notices: Supported 00:32:00.642 Controller Attributes 00:32:00.642 128-bit Host Identifier: Not Supported 00:32:00.642 Non-Operational Permissive Mode: Not Supported 00:32:00.642 NVM Sets: Not Supported 00:32:00.642 Read Recovery Levels: Not Supported 00:32:00.642 Endurance Groups: Not Supported 00:32:00.642 Predictable Latency Mode: Not Supported 00:32:00.642 Traffic Based Keep ALive: Not Supported 00:32:00.642 Namespace Granularity: Not Supported 00:32:00.642 SQ Associations: Not Supported 00:32:00.642 UUID List: Not Supported 00:32:00.642 Multi-Domain Subsystem: Not Supported 00:32:00.642 Fixed Capacity Management: Not Supported 00:32:00.642 Variable Capacity Management: Not Supported 00:32:00.642 Delete Endurance Group: Not Supported 00:32:00.642 Delete NVM Set: Not Supported 00:32:00.642 Extended LBA Formats Supported: Not Supported 00:32:00.642 Flexible Data Placement Supported: Not Supported 00:32:00.642 00:32:00.642 Controller Memory Buffer Support 00:32:00.642 ================================ 00:32:00.642 Supported: No 
00:32:00.642 00:32:00.642 Persistent Memory Region Support 00:32:00.642 ================================ 00:32:00.642 Supported: No 00:32:00.642 00:32:00.642 Admin Command Set Attributes 00:32:00.642 ============================ 00:32:00.642 Security Send/Receive: Not Supported 00:32:00.642 Format NVM: Not Supported 00:32:00.642 Firmware Activate/Download: Not Supported 00:32:00.642 Namespace Management: Not Supported 00:32:00.642 Device Self-Test: Not Supported 00:32:00.642 Directives: Not Supported 00:32:00.642 NVMe-MI: Not Supported 00:32:00.642 Virtualization Management: Not Supported 00:32:00.642 Doorbell Buffer Config: Not Supported 00:32:00.642 Get LBA Status Capability: Not Supported 00:32:00.642 Command & Feature Lockdown Capability: Not Supported 00:32:00.642 Abort Command Limit: 1 00:32:00.642 Async Event Request Limit: 1 00:32:00.642 Number of Firmware Slots: N/A 00:32:00.642 Firmware Slot 1 Read-Only: N/A 00:32:00.642 Firmware Activation Without Reset: N/A 00:32:00.642 Multiple Update Detection Support: N/A 00:32:00.642 Firmware Update Granularity: No Information Provided 00:32:00.642 Per-Namespace SMART Log: No 00:32:00.642 Asymmetric Namespace Access Log Page: Not Supported 00:32:00.642 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:00.642 Command Effects Log Page: Not Supported 00:32:00.642 Get Log Page Extended Data: Supported 00:32:00.642 Telemetry Log Pages: Not Supported 00:32:00.642 Persistent Event Log Pages: Not Supported 00:32:00.642 Supported Log Pages Log Page: May Support 00:32:00.642 Commands Supported & Effects Log Page: Not Supported 00:32:00.642 Feature Identifiers & Effects Log Page:May Support 00:32:00.642 NVMe-MI Commands & Effects Log Page: May Support 00:32:00.642 Data Area 4 for Telemetry Log: Not Supported 00:32:00.642 Error Log Page Entries Supported: 1 00:32:00.642 Keep Alive: Not Supported 00:32:00.642 00:32:00.642 NVM Command Set Attributes 00:32:00.642 ========================== 00:32:00.642 Submission Queue Entry Size 00:32:00.642 Max: 1 00:32:00.642 Min: 1 00:32:00.642 Completion Queue Entry Size 00:32:00.642 Max: 1 00:32:00.642 Min: 1 00:32:00.642 Number of Namespaces: 0 00:32:00.642 Compare Command: Not Supported 00:32:00.642 Write Uncorrectable Command: Not Supported 00:32:00.642 Dataset Management Command: Not Supported 00:32:00.642 Write Zeroes Command: Not Supported 00:32:00.642 Set Features Save Field: Not Supported 00:32:00.642 Reservations: Not Supported 00:32:00.642 Timestamp: Not Supported 00:32:00.642 Copy: Not Supported 00:32:00.642 Volatile Write Cache: Not Present 00:32:00.642 Atomic Write Unit (Normal): 1 00:32:00.642 Atomic Write Unit (PFail): 1 00:32:00.642 Atomic Compare & Write Unit: 1 00:32:00.642 Fused Compare & Write: Not Supported 00:32:00.642 Scatter-Gather List 00:32:00.642 SGL Command Set: Supported 00:32:00.642 SGL Keyed: Not Supported 00:32:00.642 SGL Bit Bucket Descriptor: Not Supported 00:32:00.642 SGL Metadata Pointer: Not Supported 00:32:00.642 Oversized SGL: Not Supported 00:32:00.642 SGL Metadata Address: Not Supported 00:32:00.642 SGL Offset: Supported 00:32:00.642 Transport SGL Data Block: Not Supported 00:32:00.642 Replay Protected Memory Block: Not Supported 00:32:00.642 00:32:00.642 Firmware Slot Information 00:32:00.642 ========================= 00:32:00.642 Active slot: 0 00:32:00.642 00:32:00.642 00:32:00.642 Error Log 00:32:00.642 ========= 00:32:00.642 00:32:00.642 Active Namespaces 00:32:00.642 ================= 00:32:00.642 Discovery Log Page 00:32:00.642 ================== 00:32:00.642 
Generation Counter: 2 00:32:00.642 Number of Records: 2 00:32:00.642 Record Format: 0 00:32:00.642 00:32:00.642 Discovery Log Entry 0 00:32:00.642 ---------------------- 00:32:00.643 Transport Type: 3 (TCP) 00:32:00.643 Address Family: 1 (IPv4) 00:32:00.643 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:00.643 Entry Flags: 00:32:00.643 Duplicate Returned Information: 0 00:32:00.643 Explicit Persistent Connection Support for Discovery: 0 00:32:00.643 Transport Requirements: 00:32:00.643 Secure Channel: Not Specified 00:32:00.643 Port ID: 1 (0x0001) 00:32:00.643 Controller ID: 65535 (0xffff) 00:32:00.643 Admin Max SQ Size: 32 00:32:00.643 Transport Service Identifier: 4420 00:32:00.643 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:00.643 Transport Address: 10.0.0.1 00:32:00.643 Discovery Log Entry 1 00:32:00.643 ---------------------- 00:32:00.643 Transport Type: 3 (TCP) 00:32:00.643 Address Family: 1 (IPv4) 00:32:00.643 Subsystem Type: 2 (NVM Subsystem) 00:32:00.643 Entry Flags: 00:32:00.643 Duplicate Returned Information: 0 00:32:00.643 Explicit Persistent Connection Support for Discovery: 0 00:32:00.643 Transport Requirements: 00:32:00.643 Secure Channel: Not Specified 00:32:00.643 Port ID: 1 (0x0001) 00:32:00.643 Controller ID: 65535 (0xffff) 00:32:00.643 Admin Max SQ Size: 32 00:32:00.643 Transport Service Identifier: 4420 00:32:00.643 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:00.643 Transport Address: 10.0.0.1 00:32:00.643 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.643 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.643 get_feature(0x01) failed 00:32:00.643 get_feature(0x02) failed 00:32:00.643 get_feature(0x04) failed 00:32:00.643 ===================================================== 00:32:00.643 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:00.643 ===================================================== 00:32:00.643 Controller Capabilities/Features 00:32:00.643 ================================ 00:32:00.643 Vendor ID: 0000 00:32:00.643 Subsystem Vendor ID: 0000 00:32:00.643 Serial Number: 5ff4d2d983685354cd80 00:32:00.643 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:00.643 Firmware Version: 6.7.0-68 00:32:00.643 Recommended Arb Burst: 6 00:32:00.643 IEEE OUI Identifier: 00 00 00 00:32:00.643 Multi-path I/O 00:32:00.643 May have multiple subsystem ports: Yes 00:32:00.643 May have multiple controllers: Yes 00:32:00.643 Associated with SR-IOV VF: No 00:32:00.643 Max Data Transfer Size: Unlimited 00:32:00.643 Max Number of Namespaces: 1024 00:32:00.643 Max Number of I/O Queues: 128 00:32:00.643 NVMe Specification Version (VS): 1.3 00:32:00.643 NVMe Specification Version (Identify): 1.3 00:32:00.643 Maximum Queue Entries: 1024 00:32:00.643 Contiguous Queues Required: No 00:32:00.643 Arbitration Mechanisms Supported 00:32:00.643 Weighted Round Robin: Not Supported 00:32:00.643 Vendor Specific: Not Supported 00:32:00.643 Reset Timeout: 7500 ms 00:32:00.643 Doorbell Stride: 4 bytes 00:32:00.643 NVM Subsystem Reset: Not Supported 00:32:00.643 Command Sets Supported 00:32:00.643 NVM Command Set: Supported 00:32:00.643 Boot Partition: Not Supported 00:32:00.643 Memory Page Size Minimum: 4096 bytes 00:32:00.643 Memory Page Size Maximum: 4096 bytes 00:32:00.643 
Persistent Memory Region: Not Supported 00:32:00.643 Optional Asynchronous Events Supported 00:32:00.643 Namespace Attribute Notices: Supported 00:32:00.643 Firmware Activation Notices: Not Supported 00:32:00.643 ANA Change Notices: Supported 00:32:00.643 PLE Aggregate Log Change Notices: Not Supported 00:32:00.643 LBA Status Info Alert Notices: Not Supported 00:32:00.643 EGE Aggregate Log Change Notices: Not Supported 00:32:00.643 Normal NVM Subsystem Shutdown event: Not Supported 00:32:00.643 Zone Descriptor Change Notices: Not Supported 00:32:00.643 Discovery Log Change Notices: Not Supported 00:32:00.643 Controller Attributes 00:32:00.643 128-bit Host Identifier: Supported 00:32:00.643 Non-Operational Permissive Mode: Not Supported 00:32:00.643 NVM Sets: Not Supported 00:32:00.643 Read Recovery Levels: Not Supported 00:32:00.643 Endurance Groups: Not Supported 00:32:00.643 Predictable Latency Mode: Not Supported 00:32:00.643 Traffic Based Keep ALive: Supported 00:32:00.643 Namespace Granularity: Not Supported 00:32:00.643 SQ Associations: Not Supported 00:32:00.643 UUID List: Not Supported 00:32:00.643 Multi-Domain Subsystem: Not Supported 00:32:00.643 Fixed Capacity Management: Not Supported 00:32:00.643 Variable Capacity Management: Not Supported 00:32:00.643 Delete Endurance Group: Not Supported 00:32:00.643 Delete NVM Set: Not Supported 00:32:00.643 Extended LBA Formats Supported: Not Supported 00:32:00.643 Flexible Data Placement Supported: Not Supported 00:32:00.643 00:32:00.643 Controller Memory Buffer Support 00:32:00.643 ================================ 00:32:00.643 Supported: No 00:32:00.643 00:32:00.643 Persistent Memory Region Support 00:32:00.643 ================================ 00:32:00.643 Supported: No 00:32:00.643 00:32:00.643 Admin Command Set Attributes 00:32:00.643 ============================ 00:32:00.643 Security Send/Receive: Not Supported 00:32:00.643 Format NVM: Not Supported 00:32:00.643 Firmware Activate/Download: Not Supported 00:32:00.643 Namespace Management: Not Supported 00:32:00.643 Device Self-Test: Not Supported 00:32:00.643 Directives: Not Supported 00:32:00.643 NVMe-MI: Not Supported 00:32:00.643 Virtualization Management: Not Supported 00:32:00.643 Doorbell Buffer Config: Not Supported 00:32:00.643 Get LBA Status Capability: Not Supported 00:32:00.643 Command & Feature Lockdown Capability: Not Supported 00:32:00.643 Abort Command Limit: 4 00:32:00.643 Async Event Request Limit: 4 00:32:00.643 Number of Firmware Slots: N/A 00:32:00.643 Firmware Slot 1 Read-Only: N/A 00:32:00.643 Firmware Activation Without Reset: N/A 00:32:00.643 Multiple Update Detection Support: N/A 00:32:00.643 Firmware Update Granularity: No Information Provided 00:32:00.643 Per-Namespace SMART Log: Yes 00:32:00.643 Asymmetric Namespace Access Log Page: Supported 00:32:00.643 ANA Transition Time : 10 sec 00:32:00.643 00:32:00.643 Asymmetric Namespace Access Capabilities 00:32:00.643 ANA Optimized State : Supported 00:32:00.643 ANA Non-Optimized State : Supported 00:32:00.643 ANA Inaccessible State : Supported 00:32:00.643 ANA Persistent Loss State : Supported 00:32:00.643 ANA Change State : Supported 00:32:00.643 ANAGRPID is not changed : No 00:32:00.643 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:00.643 00:32:00.643 ANA Group Identifier Maximum : 128 00:32:00.643 Number of ANA Group Identifiers : 128 00:32:00.643 Max Number of Allowed Namespaces : 1024 00:32:00.643 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:00.643 Command Effects Log Page: Supported 
00:32:00.643 Get Log Page Extended Data: Supported 00:32:00.643 Telemetry Log Pages: Not Supported 00:32:00.643 Persistent Event Log Pages: Not Supported 00:32:00.643 Supported Log Pages Log Page: May Support 00:32:00.643 Commands Supported & Effects Log Page: Not Supported 00:32:00.643 Feature Identifiers & Effects Log Page:May Support 00:32:00.643 NVMe-MI Commands & Effects Log Page: May Support 00:32:00.643 Data Area 4 for Telemetry Log: Not Supported 00:32:00.643 Error Log Page Entries Supported: 128 00:32:00.643 Keep Alive: Supported 00:32:00.643 Keep Alive Granularity: 1000 ms 00:32:00.643 00:32:00.643 NVM Command Set Attributes 00:32:00.643 ========================== 00:32:00.643 Submission Queue Entry Size 00:32:00.643 Max: 64 00:32:00.643 Min: 64 00:32:00.643 Completion Queue Entry Size 00:32:00.643 Max: 16 00:32:00.643 Min: 16 00:32:00.643 Number of Namespaces: 1024 00:32:00.643 Compare Command: Not Supported 00:32:00.643 Write Uncorrectable Command: Not Supported 00:32:00.643 Dataset Management Command: Supported 00:32:00.643 Write Zeroes Command: Supported 00:32:00.643 Set Features Save Field: Not Supported 00:32:00.643 Reservations: Not Supported 00:32:00.643 Timestamp: Not Supported 00:32:00.643 Copy: Not Supported 00:32:00.643 Volatile Write Cache: Present 00:32:00.643 Atomic Write Unit (Normal): 1 00:32:00.643 Atomic Write Unit (PFail): 1 00:32:00.643 Atomic Compare & Write Unit: 1 00:32:00.643 Fused Compare & Write: Not Supported 00:32:00.643 Scatter-Gather List 00:32:00.643 SGL Command Set: Supported 00:32:00.643 SGL Keyed: Not Supported 00:32:00.643 SGL Bit Bucket Descriptor: Not Supported 00:32:00.643 SGL Metadata Pointer: Not Supported 00:32:00.643 Oversized SGL: Not Supported 00:32:00.644 SGL Metadata Address: Not Supported 00:32:00.644 SGL Offset: Supported 00:32:00.644 Transport SGL Data Block: Not Supported 00:32:00.644 Replay Protected Memory Block: Not Supported 00:32:00.644 00:32:00.644 Firmware Slot Information 00:32:00.644 ========================= 00:32:00.644 Active slot: 0 00:32:00.644 00:32:00.644 Asymmetric Namespace Access 00:32:00.644 =========================== 00:32:00.644 Change Count : 0 00:32:00.644 Number of ANA Group Descriptors : 1 00:32:00.644 ANA Group Descriptor : 0 00:32:00.644 ANA Group ID : 1 00:32:00.644 Number of NSID Values : 1 00:32:00.644 Change Count : 0 00:32:00.644 ANA State : 1 00:32:00.644 Namespace Identifier : 1 00:32:00.644 00:32:00.644 Commands Supported and Effects 00:32:00.644 ============================== 00:32:00.644 Admin Commands 00:32:00.644 -------------- 00:32:00.644 Get Log Page (02h): Supported 00:32:00.644 Identify (06h): Supported 00:32:00.644 Abort (08h): Supported 00:32:00.644 Set Features (09h): Supported 00:32:00.644 Get Features (0Ah): Supported 00:32:00.644 Asynchronous Event Request (0Ch): Supported 00:32:00.644 Keep Alive (18h): Supported 00:32:00.644 I/O Commands 00:32:00.644 ------------ 00:32:00.644 Flush (00h): Supported 00:32:00.644 Write (01h): Supported LBA-Change 00:32:00.644 Read (02h): Supported 00:32:00.644 Write Zeroes (08h): Supported LBA-Change 00:32:00.644 Dataset Management (09h): Supported 00:32:00.644 00:32:00.644 Error Log 00:32:00.644 ========= 00:32:00.644 Entry: 0 00:32:00.644 Error Count: 0x3 00:32:00.644 Submission Queue Id: 0x0 00:32:00.644 Command Id: 0x5 00:32:00.644 Phase Bit: 0 00:32:00.644 Status Code: 0x2 00:32:00.644 Status Code Type: 0x0 00:32:00.644 Do Not Retry: 1 00:32:00.644 Error Location: 0x28 00:32:00.644 LBA: 0x0 00:32:00.644 Namespace: 0x0 00:32:00.644 Vendor Log 
Page: 0x0 00:32:00.644 ----------- 00:32:00.644 Entry: 1 00:32:00.644 Error Count: 0x2 00:32:00.644 Submission Queue Id: 0x0 00:32:00.644 Command Id: 0x5 00:32:00.644 Phase Bit: 0 00:32:00.644 Status Code: 0x2 00:32:00.644 Status Code Type: 0x0 00:32:00.644 Do Not Retry: 1 00:32:00.644 Error Location: 0x28 00:32:00.644 LBA: 0x0 00:32:00.644 Namespace: 0x0 00:32:00.644 Vendor Log Page: 0x0 00:32:00.644 ----------- 00:32:00.644 Entry: 2 00:32:00.644 Error Count: 0x1 00:32:00.644 Submission Queue Id: 0x0 00:32:00.644 Command Id: 0x4 00:32:00.644 Phase Bit: 0 00:32:00.644 Status Code: 0x2 00:32:00.644 Status Code Type: 0x0 00:32:00.644 Do Not Retry: 1 00:32:00.644 Error Location: 0x28 00:32:00.644 LBA: 0x0 00:32:00.644 Namespace: 0x0 00:32:00.644 Vendor Log Page: 0x0 00:32:00.644 00:32:00.644 Number of Queues 00:32:00.644 ================ 00:32:00.644 Number of I/O Submission Queues: 128 00:32:00.644 Number of I/O Completion Queues: 128 00:32:00.644 00:32:00.644 ZNS Specific Controller Data 00:32:00.644 ============================ 00:32:00.644 Zone Append Size Limit: 0 00:32:00.644 00:32:00.644 00:32:00.644 Active Namespaces 00:32:00.644 ================= 00:32:00.644 get_feature(0x05) failed 00:32:00.644 Namespace ID:1 00:32:00.644 Command Set Identifier: NVM (00h) 00:32:00.644 Deallocate: Supported 00:32:00.644 Deallocated/Unwritten Error: Not Supported 00:32:00.644 Deallocated Read Value: Unknown 00:32:00.644 Deallocate in Write Zeroes: Not Supported 00:32:00.644 Deallocated Guard Field: 0xFFFF 00:32:00.644 Flush: Supported 00:32:00.644 Reservation: Not Supported 00:32:00.644 Namespace Sharing Capabilities: Multiple Controllers 00:32:00.644 Size (in LBAs): 1953525168 (931GiB) 00:32:00.644 Capacity (in LBAs): 1953525168 (931GiB) 00:32:00.644 Utilization (in LBAs): 1953525168 (931GiB) 00:32:00.644 UUID: 2bed1295-fda2-4218-a785-d354415b829d 00:32:00.644 Thin Provisioning: Not Supported 00:32:00.644 Per-NS Atomic Units: Yes 00:32:00.644 Atomic Boundary Size (Normal): 0 00:32:00.644 Atomic Boundary Size (PFail): 0 00:32:00.644 Atomic Boundary Offset: 0 00:32:00.644 NGUID/EUI64 Never Reused: No 00:32:00.644 ANA group ID: 1 00:32:00.644 Namespace Write Protected: No 00:32:00.644 Number of LBA Formats: 1 00:32:00.644 Current LBA Format: LBA Format #00 00:32:00.644 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:00.644 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:00.644 rmmod nvme_tcp 00:32:00.644 rmmod nvme_fabrics 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:00.644 02:10:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:00.644 02:10:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:03.177 02:10:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:04.112 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:04.112 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:04.112 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:32:04.112 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:05.047 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:05.047 00:32:05.047 real 0m9.494s 00:32:05.047 user 0m2.111s 00:32:05.047 sys 0m3.367s 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:05.047 ************************************ 00:32:05.047 END TEST nvmf_identify_kernel_target 00:32:05.047 ************************************ 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.047 ************************************ 00:32:05.047 START TEST nvmf_auth_host 00:32:05.047 ************************************ 00:32:05.047 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:05.306 * Looking for test storage... 00:32:05.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:05.306 02:10:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:05.306 02:10:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.205 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:07.205 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:07.205 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:07.205 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:07.205 02:10:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:07.205 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:07.205 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:07.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
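Annotation: nvmftestinit classifies the host's NICs by PCI vendor/device ID (the e810/x722/mlx arrays traced above) and then resolves each matching device's kernel net interface from sysfs, which is where the "Found net devices under ..." lines come from. A rough sketch of that classification, walking sysfs directly instead of using the script's cached lspci lookup (pci_bus_cache), and covering only the Intel IDs visible in this trace:

    # Rough sketch: find supported test NICs by PCI ID and list their net devices.
    nic_ids=(0x1592 0x159b 0x37d2)   # Intel E810 / X722 device IDs from the lists above
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        dev_id=$(cat "$pci/device")
        for id in "${nic_ids[@]}"; do
            [[ $dev_id == "$id" ]] || continue
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
            done
        done
    done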
00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:07.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:07.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:07.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:07.206 02:10:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:07.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:07.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:32:07.206 00:32:07.206 --- 10.0.0.2 ping statistics --- 00:32:07.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.206 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:07.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:07.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:32:07.206 00:32:07.206 --- 10.0.0.1 ping statistics --- 00:32:07.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:07.206 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:07.206 02:10:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:07.206 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:07.206 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:07.206 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:07.206 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.206 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1563891 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1563891 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1563891 ']' 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
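Annotation: the initiator/target split in this test uses a network namespace: the target-side port (cvl_0_0, 10.0.0.2) is moved into cvl_0_0_ns_spdk while the initiator keeps cvl_0_1 (10.0.0.1), and nvmf_tgt is then launched inside that namespace with the flags shown in the trace (-i 0 -e 0xFFFF -L nvme_auth). Condensed sketch below; the rpc.py polling loop is a simplified stand-in for waitforlisten, and the binary/script paths assume the current directory is an SPDK checkout.

    # Condensed sketch of the netns topology and target launch traced above.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address (host side)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # connectivity checks, as in the log
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Start the SPDK target inside the namespace and wait for its RPC socket
    # (simplified stand-in for waitforlisten).
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done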
00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.207 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a192acf05fd47d0eefb1eb50e9e52c6 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eQm 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a192acf05fd47d0eefb1eb50e9e52c6 0 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a192acf05fd47d0eefb1eb50e9e52c6 0 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a192acf05fd47d0eefb1eb50e9e52c6 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:07.464 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eQm 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eQm 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.eQm 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.722 02:10:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=181c655b5f5457f5b1bb5e3abfb9f9b06d5772e88ef59949ab8c9e4ef53a21e4 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0pw 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 181c655b5f5457f5b1bb5e3abfb9f9b06d5772e88ef59949ab8c9e4ef53a21e4 3 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 181c655b5f5457f5b1bb5e3abfb9f9b06d5772e88ef59949ab8c9e4ef53a21e4 3 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=181c655b5f5457f5b1bb5e3abfb9f9b06d5772e88ef59949ab8c9e4ef53a21e4 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0pw 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0pw 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.0pw 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fcec9324606f379d4bb95e383ae21ac13a38d02b87d153c0 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nEB 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fcec9324606f379d4bb95e383ae21ac13a38d02b87d153c0 0 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fcec9324606f379d4bb95e383ae21ac13a38d02b87d153c0 0 
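Annotation: gen_dhchap_key, traced above, draws len/2 random bytes with xxd, wraps them into a DH-HMAC-CHAP secret string, and stores the result in a mode-0600 temp file whose name encodes the hash (spdk.key-null.*, spdk.key-sha512.*, and so on). The sketch below is an approximation of that helper: the CRC-32/base64 wrapping reflects a reading of the in-log python step and the NVMe DH-HMAC-CHAP secret representation, so treat it as illustrative rather than the exact common.sh code.

    # Sketch of gen_dhchap_key as traced above (approximate).
    gen_dhchap_key() {
        local digest=$1 len=$2                 # e.g. "null" 32, "sha512" 64
        declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars of key material
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # DHHC-1:<hash id>:<base64(key bytes + little-endian CRC-32)>:
        python3 -c 'import base64,binascii,sys; k=bytes.fromhex(sys.argv[1]); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + binascii.crc32(k).to_bytes(4, "little")).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    # Usage matching the log: keys[0]=$(gen_dhchap_key null 32); ckeys[0]=$(gen_dhchap_key sha512 64)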
00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fcec9324606f379d4bb95e383ae21ac13a38d02b87d153c0 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nEB 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nEB 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.nEB 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9c8d8efd2e3544cbde716d1c68ff90f8e279111801594fde 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:07.722 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.e2J 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9c8d8efd2e3544cbde716d1c68ff90f8e279111801594fde 2 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9c8d8efd2e3544cbde716d1c68ff90f8e279111801594fde 2 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9c8d8efd2e3544cbde716d1c68ff90f8e279111801594fde 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.e2J 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.e2J 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.e2J 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.723 02:10:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e83a4187f2a3929d518b25c48e89cca7 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zYm 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e83a4187f2a3929d518b25c48e89cca7 1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e83a4187f2a3929d518b25c48e89cca7 1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e83a4187f2a3929d518b25c48e89cca7 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zYm 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zYm 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.zYm 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=587065c5394be0f7c8fb691321279388 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rCm 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 587065c5394be0f7c8fb691321279388 1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 587065c5394be0f7c8fb691321279388 1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=587065c5394be0f7c8fb691321279388 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:07.723 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rCm 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rCm 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rCm 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5bcf70ed963238a9ab75fbf07cac52fdd7f325e1aa97eadf 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OAx 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5bcf70ed963238a9ab75fbf07cac52fdd7f325e1aa97eadf 2 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5bcf70ed963238a9ab75fbf07cac52fdd7f325e1aa97eadf 2 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5bcf70ed963238a9ab75fbf07cac52fdd7f325e1aa97eadf 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OAx 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OAx 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OAx 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:07.981 02:10:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e9c8f62b646fd99aa4fa5e59eb657fbc 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.muD 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e9c8f62b646fd99aa4fa5e59eb657fbc 0 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e9c8f62b646fd99aa4fa5e59eb657fbc 0 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e9c8f62b646fd99aa4fa5e59eb657fbc 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.muD 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.muD 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.muD 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c6b90b3d0e166c34950485a88190f806c5c0296c9622c272454d53eb6cc5c12b 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GjK 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c6b90b3d0e166c34950485a88190f806c5c0296c9622c272454d53eb6cc5c12b 3 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c6b90b3d0e166c34950485a88190f806c5c0296c9622c272454d53eb6cc5c12b 3 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c6b90b3d0e166c34950485a88190f806c5c0296c9622c272454d53eb6cc5c12b 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:32:07.981 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GjK 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GjK 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GjK 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1563891 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1563891 ']' 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:07.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:07.982 02:10:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.eQm 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.0pw ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.0pw 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nEB 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.e2J ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.e2J 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.zYm 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rCm ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rCm 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OAx 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.muD ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.muD 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GjK 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.240 02:10:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:08.240 02:10:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:09.613 Waiting for block devices as requested 00:32:09.613 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:09.613 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:09.613 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:09.871 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:09.871 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:09.871 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:10.129 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:10.129 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:10.129 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:10.129 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:10.386 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:10.386 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:10.386 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:10.386 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:10.644 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:10.644 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:10.644 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:11.214 02:10:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:11.214 No valid GPT data, bailing 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:11.214 02:10:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:11.214 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:11.472 00:32:11.472 Discovery Log Number of Records 2, Generation counter 2 00:32:11.472 =====Discovery Log Entry 0====== 00:32:11.472 trtype: tcp 00:32:11.472 adrfam: ipv4 00:32:11.472 subtype: current discovery subsystem 00:32:11.472 treq: not specified, sq flow control disable supported 00:32:11.472 portid: 1 00:32:11.472 trsvcid: 4420 00:32:11.472 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:11.472 traddr: 10.0.0.1 00:32:11.472 eflags: none 00:32:11.472 sectype: none 00:32:11.472 =====Discovery Log Entry 1====== 00:32:11.472 trtype: tcp 00:32:11.472 adrfam: ipv4 00:32:11.472 subtype: nvme subsystem 00:32:11.472 treq: not specified, sq flow control disable supported 00:32:11.472 portid: 1 00:32:11.472 trsvcid: 4420 00:32:11.472 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:11.472 traddr: 10.0.0.1 00:32:11.472 eflags: none 00:32:11.472 sectype: none 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.472 nvme0n1 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
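The configure_kernel_target block earlier in the trace builds the kernel NVMe-oF/TCP target through configfs, but only the bare mkdir/echo/ln commands are visible because xtrace does not print the redirection targets. A rough reconstruction of those writes is sketched below; the attribute file names follow the usual nvmet configfs layout and are an assumption rather than something shown in the trace.

# Assumed nvmet configfs layout behind the mkdir/echo/ln steps seen above.
nqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir -p "$sub/namespaces/1" "$port"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # back the namespace with the local disk
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"               # listen on the initiator-facing address
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port

# Restrict access to the test host and hand it DHCHAP material (attribute names assumed)
mkdir -p /sys/kernel/config/nvmet/hosts/$hostnqn
echo 0 > "$sub/attr_allow_any_host"
ln -s /sys/kernel/config/nvmet/hosts/$hostnqn "$sub/allowed_hosts/$hostnqn"
echo 'hmac(sha256)'  > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash
echo 'DHHC-1:00:...' > /sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key  # placeholder secret

# The discovery step in the trace then verifies both the discovery subsystem and cnode0:
# nvme discover --hostnqn=$hostnqn -a 10.0.0.1 -t tcp -s 4420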
00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.472 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.473 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.731 nvme0n1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.731 02:10:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.731 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.990 nvme0n1 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.990 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.248 nvme0n1 00:32:12.248 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.248 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.248 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.248 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:32:12.248 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.248 02:10:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.248 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.506 nvme0n1 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 
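Each connect_authenticate pass in this loop reduces to the same host-side RPC sequence: register the generated key files in the SPDK keyring, pin the initiator to one digest/DH-group combination, attach with the keyring entries matching the target's keyid, and confirm the controller appears before detaching again. Condensed into one iteration (key file names taken from the trace above; the rpc.py path and socket are the usual SPDK defaults and are illustrative), it looks like:

rpc="scripts/rpc.py"    # talks to the app started earlier on /var/tmp/spdk.sock
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.nEB     # host secret for keyid 1
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e2J   # bidirectional (controller) secret
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
     --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers                             # expect "nvme0" once auth succeeds
$rpc bdev_nvme_detach_controller nvme0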
00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.506 nvme0n1 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.506 02:10:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.506 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:12.765 
02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.765 nvme0n1 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.765 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.023 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.024 02:10:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 nvme0n1 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.024 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.282 02:10:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.282 02:10:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.282 nvme0n1 00:32:13.282 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.282 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.282 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.282 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.282 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.282 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.540 02:10:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.540 nvme0n1 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.540 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.541 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.541 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.541 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.541 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.798 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
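Each connect_authenticate round traced here follows the same host-side pattern: restrict the initiator to the digest/DH-group pair under test, resolve the initiator address, attach a controller with the matching DHHC-1 secrets, confirm the controller actually came up, then detach it. A minimal sketch of that sequence as it appears in the rpc_cmd calls above, assuming rpc_cmd is the usual SPDK test wrapper around scripts/rpc.py:

    # Sketch of one host-side authentication round, following the traced rpc_cmd calls.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # The controller key is only passed for key indexes that define one (ckey4 is empty)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Limit the host to the digest/dhgroup under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        local ip; ip=$(get_main_ns_ip)    # 10.0.0.1 throughout this run
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only counts if the named controller shows up afterwards
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }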
00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.799 nvme0n1 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.799 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.057 02:10:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.315 nvme0n1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.315 02:10:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.315 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.573 nvme0n1 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:14.573 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
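The get_main_ns_ip helper that precedes every attach simply selects which environment variable names the address to dial, NVMF_FIRST_TARGET_IP for rdma or NVMF_INITIATOR_IP for tcp, and then expands it indirectly (10.0.0.1 throughout this run). A minimal sketch following nvmf/common.sh@741-755 as traced; the TEST_TRANSPORT variable name is an assumption, since the trace only shows its expanded value 'tcp':

    # Sketch: pick the address variable by transport, then expand it indirectly.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # The array holds variable *names*; bail out if the transport or candidate is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1    # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"
    }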
00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.574 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.831 nvme0n1 00:32:14.831 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.831 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.831 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.831 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.831 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.089 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:15.090 02:10:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.090 02:10:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.348 nvme0n1 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.348 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.606 nvme0n1 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.606 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.607 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.607 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:32:15.607 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.607 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:15.607 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.607 02:10:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.189 nvme0n1 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.189 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.190 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.451 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:16.451 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.451 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.709 nvme0n1 00:32:16.709 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.709 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.709 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.709 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.709 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.709 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.967 02:10:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.967 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.968 02:10:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.535 nvme0n1 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.535 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:17.536 
02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.536 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.129 nvme0n1 00:32:18.129 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.130 02:10:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.695 nvme0n1 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:18.695 02:10:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.695 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:18.696 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.696 02:10:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.627 nvme0n1 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.627 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.628 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.886 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.886 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.886 02:10:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.820 nvme0n1 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.820 02:10:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.820 02:10:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.754 nvme0n1 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.754 02:10:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.754 02:10:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.687 nvme0n1 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.687 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.688 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.688 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.688 02:10:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.688 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.688 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.688 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.688 02:10:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.620 nvme0n1 00:32:23.620 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.620 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.620 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.620 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.620 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.620 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:23.879 
02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.879 nvme0n1 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.879 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.137 nvme0n1 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.137 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.138 02:10:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.138 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.396 nvme0n1 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.396 02:10:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.396 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.654 nvme0n1 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
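For reference, the entries above and below repeat one pattern per digest/dhgroup/keyid combination: the target side is handed the 'hmac(sha384)' digest, the ffdhe dhgroup and the DHHC-1 secret via nvmet_auth_set_key, and the SPDK host is then driven through rpc_cmd. A minimal sketch of one such iteration, built only from the commands and values visible in this log (rpc_cmd is assumed to be the suite's wrapper around SPDK's JSON-RPC client; the address, NQNs and key names are copied from the attach calls above):

    # One connect_authenticate iteration as exercised by host/auth.sh (sketch).
    digest=sha384
    dhgroup=ffdhe2048
    keyid=1

    # Host side: restrict the initiator to the digest/dhgroup under test ...
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # ... then connect with the matching key pair (the controller key is only
    # passed when a ckey exists for this keyid, as in the log above).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # Success check and cleanup, mirroring host/auth.sh@64-65.
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    rpc_cmd bdev_nvme_detach_controller nvme0

Each iteration counts as passed when bdev_nvme_get_controllers reports nvme0; the controller is then detached and the loop moves on to the next keyid and dhgroup.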
00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.654 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:24.655 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.913 nvme0n1 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:24.913 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:24.914 02:10:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.914 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 nvme0n1 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.172 02:10:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:25.172 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.173 02:10:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.173 02:10:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.431 nvme0n1 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.431 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.689 nvme0n1 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.689 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.690 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 nvme0n1 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:25.948 
02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.948 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.206 nvme0n1 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.206 
02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.206 02:10:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.206 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.464 nvme0n1 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.464 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.723 02:10:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.723 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.981 nvme0n1 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.981 02:10:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.239 nvme0n1 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.239 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.240 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.240 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:27.240 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.240 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.497 nvme0n1 00:32:27.497 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.497 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.497 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.497 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.497 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.497 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:27.755 02:10:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.755 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.013 nvme0n1 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.013 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.014 02:10:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.580 nvme0n1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.580 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.148 nvme0n1 00:32:29.148 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.148 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.148 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.148 02:10:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.148 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.148 02:10:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.148 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.406 02:10:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.406 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.972 nvme0n1 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.972 02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.972 
02:10:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.538 nvme0n1 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.538 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.104 nvme0n1 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.104 02:10:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.104 02:10:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.038 nvme0n1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.038 02:10:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.972 nvme0n1 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.972 
02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.972 02:10:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.905 nvme0n1 00:32:33.905 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.905 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.905 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.905 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.905 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.905 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.164 02:10:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.097 nvme0n1 00:32:35.097 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.098 02:10:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.098 02:10:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.098 02:10:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.075 nvme0n1 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.075 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.076 02:10:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:36.334 nvme0n1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.334 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.593 nvme0n1 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:36.593 
02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.593 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.852 nvme0n1 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.852 
02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.852 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.111 nvme0n1 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.111 nvme0n1 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.111 02:10:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.111 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.370 nvme0n1 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.370 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.629 
02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.629 02:10:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.629 nvme0n1 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.629 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:37.887 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:37.887 02:10:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.888 nvme0n1 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.888 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.146 02:10:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.146 02:10:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.146 nvme0n1 00:32:38.146 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.146 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.146 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.146 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.146 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.146 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.405 
02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
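Key id 4 in the block above carries no controller key: ckey expands empty at host/auth.sh@46, the [[ -z '' ]] test at @51 short-circuits, and the attach at @61 is issued with --dhchap-key key4 only. The mechanism is the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at @58: bash's ${var:+word} alternate form emits the extra flag pair only when a controller key exists. A self-contained illustration of that pattern (the key string below is a placeholder, not one of the suite's secrets):

    ckeys_demo=([1]="DHHC-1:01:placeholder==:" [4]="")       # hypothetical controller keys
    for keyid in 1 4; do
        ckey=(${ckeys_demo[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]:-<none>}"
    done
    # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra args: <none>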
00:32:38.405 nvme0n1 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.405 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:38.663 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:38.664 02:10:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.664 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.922 nvme0n1 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.922 02:10:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.922 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.923 02:10:53 
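The nvmf/common.sh@741-755 run that precedes every bdev_nvme_attach_controller call is get_main_ns_ip picking the address the host should dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, the latter dereferencing to 10.0.0.1 here. A reconstruction of that logic as implied by the trace (the function body and the TEST_TRANSPORT variable name are inferred, not quoted from the script):

    get_main_ns_ip() {                                        # inferred from the trace
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                           # indirect expansion -> 10.0.0.1
        echo "${!ip}"
    }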
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.923 02:10:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.181 nvme0n1 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.181 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.748 nvme0n1 00:32:39.748 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.748 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.748 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.748 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.748 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.749 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.008 nvme0n1 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.008 02:10:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.267 nvme0n1 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.267 02:10:55 
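Collected from the rpc_cmd lines that repeat throughout this stretch, one host-side round-trip per (digest, dhgroup, keyid) combination amounts to the sequence below. This is a sketch using SPDK's scripts/rpc.py directly rather than the suite's rpc_cmd wrapper, and key0/ckey0 refer to whatever key material the suite registered earlier in the run:

    # 1. restrict the initiator to the negotiation parameters under test
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # 2. attach with the DH-HMAC-CHAP host key (plus controller key when bidirectional)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. verify the controller actually came up, then tear it down for the next pass
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0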
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.267 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.833 nvme0n1 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:40.834 02:10:55 
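The secrets exchanged above all use the DH-HMAC-CHAP textual form DHHC-1:<id>:<base64>:, where the second field selects the key length (00 = untransformed, 01/02/03 = 32/48/64-byte keys associated with SHA-256/384/512) and the base64 payload carries the key bytes followed by a 4-byte CRC-32 check. That reading is an interpretation of the format rather than something the log states, but it matches the payload sizes seen here; a quick way to eyeball it against one of the keys above:

    key='DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=:'
    payload=${key#DHHC-1:??:}; payload=${payload%:}
    echo -n "$payload" | base64 -d | wc -c    # 68 = 64-byte key + 4-byte CRC for an ":03:" secret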
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.834 02:10:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.399 nvme0n1 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.399 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.400 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.400 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.658 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.659 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.917 nvme0n1 00:32:41.917 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.917 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.917 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.917 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.917 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.175 02:10:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 nvme0n1 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:42.742 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.743 02:10:57 
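The host/auth.sh@101-104 markers that keep reappearing (for dhgroup ..., for keyid ..., nvmet_auth_set_key, connect_authenticate) show the shape of the sweep producing this stretch of the log: every DH group is exercised against every configured key id, with the target re-keyed before each host connect. A condensed sketch of that driver loop; the array contents and the enclosing sha512 digest selection are inferred from the trace, not quoted from the script:

    # DH groups seen in this portion of the log; earlier groups run before it
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do                        # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                         # host/auth.sh@102, key ids 0..4
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"    # host/auth.sh@103 (target side)
            connect_authenticate sha512 "$dhgroup" "$keyid"    # host/auth.sh@104 (host side)
        done
    done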
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.743 02:10:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.309 nvme0n1 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGExOTJhY2YwNWZkNDdkMGVlZmIxZWI1MGU5ZTUyYzYhSQfA: 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: ]] 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTgxYzY1NWI1ZjU0NTdmNWIxYmI1ZTNhYmZiOWY5YjA2ZDU3NzJlODhlZjU5OTQ5YWI4YzllNGVmNTNhMjFlNFN/Ar8=: 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.309 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.310 02:10:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.244 nvme0n1 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.244 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.245 02:10:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 nvme0n1 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.619 02:11:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTgzYTQxODdmMmEzOTI5ZDUxOGIyNWM0OGU4OWNjYTei/Jif: 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTg3MDY1YzUzOTRiZTBmN2M4ZmI2OTEzMjEyNzkzODh5ESnS: 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.619 02:11:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.619 02:11:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.554 nvme0n1 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWJjZjcwZWQ5NjMyMzhhOWFiNzVmYmYwN2NhYzUyZmRkN2YzMjVlMWFhOTdlYWRm4YxG1w==: 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTljOGY2MmI2NDZmZDk5YWE0ZmE1ZTU5ZWI2NTdmYmNTfCvl: 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.554 02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.554 
02:11:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.510 nvme0n1 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZiOTBiM2QwZTE2NmMzNDk1MDQ4NWE4ODE5MGY4MDZjNWMwMjk2Yzk2MjJjMjcyNDU0ZDUzZWI2Y2M1YzEyYlFMU+I=: 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.510 02:11:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.444 nvme0n1 00:32:48.444 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.444 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.444 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.444 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmNlYzkzMjQ2MDZmMzc5ZDRiYjk1ZTM4M2FlMjFhYzEzYTM4ZDAyYjg3ZDE1M2MwhpKVfw==: 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWM4ZDhlZmQyZTM1NDRjYmRlNzE2ZDFjNjhmZjkwZjhlMjc5MTExODAxNTk0ZmRlVvu5Vw==: 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.445 request: 00:32:48.445 { 00:32:48.445 "name": "nvme0", 00:32:48.445 "trtype": "tcp", 00:32:48.445 "traddr": "10.0.0.1", 00:32:48.445 "adrfam": "ipv4", 00:32:48.445 "trsvcid": "4420", 00:32:48.445 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:48.445 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:48.445 "prchk_reftag": false, 00:32:48.445 "prchk_guard": false, 00:32:48.445 "hdgst": false, 00:32:48.445 "ddgst": false, 00:32:48.445 "method": "bdev_nvme_attach_controller", 00:32:48.445 "req_id": 1 00:32:48.445 } 00:32:48.445 Got JSON-RPC error response 00:32:48.445 response: 00:32:48.445 { 00:32:48.445 "code": -5, 00:32:48.445 "message": "Input/output error" 00:32:48.445 } 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.445 02:11:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.445 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.707 request: 00:32:48.707 { 00:32:48.707 "name": "nvme0", 00:32:48.707 "trtype": "tcp", 00:32:48.707 "traddr": "10.0.0.1", 00:32:48.707 "adrfam": "ipv4", 00:32:48.707 "trsvcid": "4420", 00:32:48.707 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:48.707 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:48.707 "prchk_reftag": false, 00:32:48.707 "prchk_guard": false, 00:32:48.707 "hdgst": false, 00:32:48.707 "ddgst": false, 00:32:48.707 "dhchap_key": "key2", 00:32:48.707 "method": "bdev_nvme_attach_controller", 00:32:48.707 "req_id": 1 00:32:48.707 } 00:32:48.707 Got JSON-RPC error response 00:32:48.707 response: 00:32:48.707 { 00:32:48.707 "code": -5, 00:32:48.707 "message": "Input/output error" 00:32:48.707 } 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.707 request: 00:32:48.707 { 00:32:48.707 "name": "nvme0", 00:32:48.707 "trtype": "tcp", 00:32:48.707 "traddr": "10.0.0.1", 00:32:48.707 "adrfam": "ipv4", 00:32:48.707 "trsvcid": "4420", 00:32:48.707 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:48.707 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:48.707 "prchk_reftag": false, 00:32:48.707 "prchk_guard": false, 00:32:48.707 "hdgst": false, 00:32:48.707 "ddgst": false, 00:32:48.707 "dhchap_key": "key1", 00:32:48.707 "dhchap_ctrlr_key": "ckey2", 00:32:48.707 "method": "bdev_nvme_attach_controller", 00:32:48.707 "req_id": 1 00:32:48.707 } 00:32:48.707 Got JSON-RPC error response 00:32:48.707 response: 00:32:48.707 { 00:32:48.707 "code": -5, 00:32:48.707 "message": "Input/output error" 00:32:48.707 } 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:48.707 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:48.708 rmmod nvme_tcp 00:32:48.708 rmmod nvme_fabrics 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1563891 ']' 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1563891 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1563891 ']' 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1563891 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:48.708 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1563891 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1563891' 00:32:48.966 killing process with pid 1563891 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1563891 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1563891 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:48.966 02:11:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.966 02:11:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.499 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:51.499 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:51.499 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:51.499 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:51.500 02:11:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:52.436 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:52.436 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:52.436 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:52.436 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:52.436 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:52.436 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:52.437 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:52.437 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:52.437 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:53.375 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:53.634 02:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.eQm /tmp/spdk.key-null.nEB /tmp/spdk.key-sha256.zYm /tmp/spdk.key-sha384.OAx /tmp/spdk.key-sha512.GjK /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:53.634 02:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:54.638 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:54.638 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:54.638 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:54.638 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:54.638 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:54.638 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:54.638 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:54.638 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:54.638 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:54.638 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:54.638 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:54.638 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:54.638 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:54.638 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:54.638 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:54.638 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:54.638 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:54.897 00:32:54.897 real 0m49.649s 00:32:54.897 user 0m46.773s 00:32:54.897 sys 0m5.749s 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.897 ************************************ 00:32:54.897 END TEST nvmf_auth_host 00:32:54.897 ************************************ 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.897 ************************************ 00:32:54.897 START TEST nvmf_digest 00:32:54.897 ************************************ 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:54.897 * Looking for test storage... 
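A minimal sketch of the DH-CHAP attach flow that the nvmf_auth_host run above exercises, assuming an SPDK target already listening on 10.0.0.1:4420 with keys named key0/ckey0 loaded as the test sets up earlier; rpc_cmd in the trace is effectively scripts/rpc.py, whose path below is illustrative:

    RPC=./scripts/rpc.py
    # Restrict the initiator to one digest/dhgroup pair, as each test iteration does.
    $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Attach with a host key and a bidirectional controller key; omitting the keys, or
    # passing a mismatched pair, reproduces the negative cases above that return
    # JSON-RPC code -5 (Input/output error).
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Verify the controller came up, then detach, mirroring host/auth.sh@64-65.
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'
    $RPC bdev_nvme_detach_controller nvme0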
00:32:54.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:54.897 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:54.899 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:54.900 
02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:54.900 02:11:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:57.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:57.441 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:57.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.442 
02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:57.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:57.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:57.442 02:11:11 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:57.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:57.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:32:57.442 00:32:57.442 --- 10.0.0.2 ping statistics --- 00:32:57.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.442 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:57.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:57.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:32:57.442 00:32:57.442 --- 10.0.0.1 ping statistics --- 00:32:57.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:57.442 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:57.442 ************************************ 00:32:57.442 START TEST nvmf_digest_clean 00:32:57.442 ************************************ 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1573454 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1573454 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1573454 ']' 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:57.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.442 02:11:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.443 [2024-07-24 02:11:11.951036] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:32:57.443 [2024-07-24 02:11:11.951106] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:57.443 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.443 [2024-07-24 02:11:12.012481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.443 [2024-07-24 02:11:12.096040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:57.443 [2024-07-24 02:11:12.096092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:57.443 [2024-07-24 02:11:12.096119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:57.443 [2024-07-24 02:11:12.096131] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:57.443 [2024-07-24 02:11:12.096140] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
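
The nvmfappstart step traced above amounts to launching nvmf_tgt inside the target network namespace and blocking until its RPC socket answers. A minimal shell sketch of that step, assuming the app's default /var/tmp/spdk.sock socket and a simple poll in place of the real waitforlisten helper (which also retries RPCs):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk                 # target-side namespace created earlier in the trace
    RPC_SOCK=/var/tmp/spdk.sock        # assumed default app RPC socket

    # Launch the NVMe-oF target inside the namespace, paused until framework init.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!

    # Stand-in for waitforlisten: wait until the UNIX RPC socket shows up.
    until [ -S "$RPC_SOCK" ]; do sleep 0.1; done
    echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
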
00:32:57.443 [2024-07-24 02:11:12.096165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.443 null0 00:32:57.443 [2024-07-24 02:11:12.286245] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:57.443 [2024-07-24 02:11:12.310527] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1573479 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1573479 /var/tmp/bperf.sock 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1573479 ']' 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:57.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:57.443 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:57.701 [2024-07-24 02:11:12.357826] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:32:57.701 [2024-07-24 02:11:12.357899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573479 ] 00:32:57.701 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.701 [2024-07-24 02:11:12.418283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.701 [2024-07-24 02:11:12.504400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:57.701 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:57.701 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:57.701 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:57.701 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:57.701 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:58.267 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.267 02:11:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:58.525 nvme0n1 00:32:58.525 02:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:58.525 02:11:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:58.525 Running I/O for 2 seconds... 
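
The bdevperf side of each clean-digest run follows the same pattern: start the app paused, initialize it over its private RPC socket, attach a TCP controller with data digest enabled, and drive the 2-second workload through bdevperf.py. A condensed replay of the first run (randread, 4 KiB, QD 128) using only command lines that appear in the trace; waiting for the bperf socket to come up is omitted for brevity:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf on core mask 0x2, paused until framework_start_init.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # Release the app framework, then attach a TCP controller with data digest (--ddgst).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Run the configured workload against the attached bdev (nvme0n1).
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
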
00:33:01.052 00:33:01.052 Latency(us) 00:33:01.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.052 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:01.052 nvme0n1 : 2.01 18742.62 73.21 0.00 0.00 6819.73 3713.71 16893.72 00:33:01.052 =================================================================================================================== 00:33:01.052 Total : 18742.62 73.21 0.00 0.00 6819.73 3713.71 16893.72 00:33:01.052 0 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:01.052 | select(.opcode=="crc32c") 00:33:01.052 | "\(.module_name) \(.executed)"' 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1573479 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1573479 ']' 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1573479 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1573479 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1573479' 00:33:01.052 killing process with pid 1573479 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1573479 00:33:01.052 Received shutdown signal, test time was about 2.000000 seconds 00:33:01.052 00:33:01.052 Latency(us) 00:33:01.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.052 =================================================================================================================== 00:33:01.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:01.052 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1573479 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1573887 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1573887 /var/tmp/bperf.sock 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1573887 ']' 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:01.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:01.053 02:11:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:01.311 [2024-07-24 02:11:15.961649] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:01.311 [2024-07-24 02:11:15.961763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573887 ] 00:33:01.311 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:01.311 Zero copy mechanism will not be used. 
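
The pass/fail decision after each run rests on the accel framework's crc32c counters rather than on the I/O numbers alone: the jq filter seen above pulls out which module computed the digests and how many operations it executed, and with DSA disabled the expected module is software. A sketch of that check, assuming jq is available and using the accel_get_stats RPC shown in the trace:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Extract "module executed" for the crc32c opcode from the bdevperf app.
    read -r acc_module acc_executed < <(
      "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # scan_dsa=false in these runs, so digests must have been computed in software,
    # and at least one crc32c operation must actually have executed.
    [ "$acc_executed" -gt 0 ] && [ "$acc_module" = software ] && echo "digest check OK"
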
00:33:01.311 EAL: No free 2048 kB hugepages reported on node 1 00:33:01.311 [2024-07-24 02:11:16.019511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.311 [2024-07-24 02:11:16.105815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:01.311 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:01.311 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:01.311 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:01.311 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:01.311 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:01.878 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:01.878 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:02.136 nvme0n1 00:33:02.136 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:02.136 02:11:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:02.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:02.394 Zero copy mechanism will not be used. 00:33:02.394 Running I/O for 2 seconds... 
00:33:04.293 00:33:04.293 Latency(us) 00:33:04.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:04.293 nvme0n1 : 2.00 4127.06 515.88 0.00 0.00 3872.61 904.15 7427.41 00:33:04.293 =================================================================================================================== 00:33:04.293 Total : 4127.06 515.88 0.00 0.00 3872.61 904.15 7427.41 00:33:04.293 0 00:33:04.293 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:04.293 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:04.293 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:04.293 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:04.293 | select(.opcode=="crc32c") 00:33:04.293 | "\(.module_name) \(.executed)"' 00:33:04.293 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1573887 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1573887 ']' 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1573887 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1573887 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1573887' 00:33:04.551 killing process with pid 1573887 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1573887 00:33:04.551 Received shutdown signal, test time was about 2.000000 seconds 00:33:04.551 00:33:04.551 Latency(us) 00:33:04.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:04.551 =================================================================================================================== 00:33:04.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:04.551 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1573887 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1574307 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1574307 /var/tmp/bperf.sock 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1574307 ']' 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.809 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:04.809 [2024-07-24 02:11:19.608200] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:33:04.809 [2024-07-24 02:11:19.608287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574307 ] 00:33:04.809 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.810 [2024-07-24 02:11:19.673048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:05.067 [2024-07-24 02:11:19.765039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.067 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:05.067 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:05.067 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:05.067 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:05.067 02:11:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:05.632 02:11:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.632 02:11:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.891 nvme0n1 00:33:05.891 02:11:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:05.891 02:11:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.891 Running I/O for 2 seconds... 
00:33:08.421 00:33:08.421 Latency(us) 00:33:08.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.421 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:08.421 nvme0n1 : 2.01 20247.49 79.09 0.00 0.00 6311.14 2694.26 16117.00 00:33:08.421 =================================================================================================================== 00:33:08.421 Total : 20247.49 79.09 0.00 0.00 6311.14 2694.26 16117.00 00:33:08.421 0 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:08.421 | select(.opcode=="crc32c") 00:33:08.421 | "\(.module_name) \(.executed)"' 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1574307 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1574307 ']' 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1574307 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1574307 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1574307' 00:33:08.421 killing process with pid 1574307 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1574307 00:33:08.421 Received shutdown signal, test time was about 2.000000 seconds 00:33:08.421 00:33:08.421 Latency(us) 00:33:08.421 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:08.421 =================================================================================================================== 00:33:08.421 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:08.421 02:11:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1574307 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1574814 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1574814 /var/tmp/bperf.sock 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1574814 ']' 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:08.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.421 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:08.421 [2024-07-24 02:11:23.252894] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:08.421 [2024-07-24 02:11:23.252987] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1574814 ] 00:33:08.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:08.421 Zero copy mechanism will not be used. 
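
The remaining clean-digest runs repeat the same flow; only the run_bperf arguments change. A hypothetical loop over the four combinations exercised in this trace (the actual digest.sh calls run_bperf explicitly for each case rather than looping):

    # rw, block size (bytes), queue depth; scan_dsa is false throughout.
    for args in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        set -- $args
        echo "run_bperf $1 $2 $3 false"
    done
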
00:33:08.421 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.421 [2024-07-24 02:11:23.315159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.679 [2024-07-24 02:11:23.406075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.679 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:08.679 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:08.679 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:08.679 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:08.679 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:08.937 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.937 02:11:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.503 nvme0n1 00:33:09.503 02:11:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:09.503 02:11:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.503 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:09.503 Zero copy mechanism will not be used. 00:33:09.503 Running I/O for 2 seconds... 
00:33:12.032 00:33:12.032 Latency(us) 00:33:12.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.032 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:12.032 nvme0n1 : 2.00 5564.92 695.61 0.00 0.00 2864.65 1941.81 12621.75 00:33:12.032 =================================================================================================================== 00:33:12.032 Total : 5564.92 695.61 0.00 0.00 2864.65 1941.81 12621.75 00:33:12.032 0 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:12.032 | select(.opcode=="crc32c") 00:33:12.032 | "\(.module_name) \(.executed)"' 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1574814 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1574814 ']' 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1574814 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1574814 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1574814' 00:33:12.032 killing process with pid 1574814 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1574814 00:33:12.032 Received shutdown signal, test time was about 2.000000 seconds 00:33:12.032 00:33:12.032 Latency(us) 00:33:12.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.032 =================================================================================================================== 00:33:12.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:12.032 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1574814 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1573454 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1573454 ']' 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1573454 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1573454 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1573454' 00:33:12.290 killing process with pid 1573454 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1573454 00:33:12.290 02:11:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1573454 00:33:12.290 00:33:12.290 real 0m15.286s 00:33:12.290 user 0m29.716s 00:33:12.290 sys 0m4.388s 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:12.548 ************************************ 00:33:12.548 END TEST nvmf_digest_clean 00:33:12.548 ************************************ 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:12.548 ************************************ 00:33:12.548 START TEST nvmf_digest_error 00:33:12.548 ************************************ 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1575249 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:12.548 02:11:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1575249 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1575249 ']' 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.548 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.548 [2024-07-24 02:11:27.286778] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:12.549 [2024-07-24 02:11:27.286849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.549 EAL: No free 2048 kB hugepages reported on node 1 00:33:12.549 [2024-07-24 02:11:27.351230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.549 [2024-07-24 02:11:27.435809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.549 [2024-07-24 02:11:27.435880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.549 [2024-07-24 02:11:27.435902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.549 [2024-07-24 02:11:27.435913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.549 [2024-07-24 02:11:27.435923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
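
The nvmf_digest_error test starting here reuses the clean-path plumbing but deliberately breaks digest calculation before the target finishes initializing: as the next lines show, the crc32c opcode is assigned to the "error" accel module while the target is still paused in --wait-for-rpc mode. A sketch of that step, assuming rpc_cmd in the trace resolves to rpc.py against the target's default /var/tmp/spdk.sock socket:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Route every crc32c operation to the "error" accel module so digest
    # calculations fail on purpose for the error-path tests that follow.
    "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
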
00:33:12.549 [2024-07-24 02:11:27.435947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.807 [2024-07-24 02:11:27.520538] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.807 null0 00:33:12.807 [2024-07-24 02:11:27.634479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.807 [2024-07-24 02:11:27.658723] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1575395 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1575395 /var/tmp/bperf.sock 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1575395 ']' 
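With the target up, crc32c is routed to the error accel module, a null0 bdev is exposed over an NVMe/TCP listener on 10.0.0.2:4420, and bdevperf is launched as the initiator on its own RPC socket (/var/tmp/bperf.sock); the host-side attach with --ddgst and the crc32c corruption injection follow below. A rough sketch of the target-side sequence using the same RPCs that appear in the trace; the transport/subsystem/namespace/listener invocations are assumptions, since common_target_config is not expanded in this excerpt:

#!/usr/bin/env bash
# Sketch only: digest-error target wiring plus the bdevperf launch shown above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }
rpc accel_assign_opc -o crc32c -m error      # route crc32c to the error-injection module
rpc framework_start_init                     # leave the --wait-for-rpc state
rpc bdev_null_create null0 100 4096          # null bdev backing the namespace (sizes assumed)
rpc nvmf_create_transport -t tcp
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator: 4 KiB random reads at QD 128 for 2 s, started later via bdevperf.py perform_tests (-z).
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &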
00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:12.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:12.807 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.065 [2024-07-24 02:11:27.705055] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:13.065 [2024-07-24 02:11:27.705119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575395 ] 00:33:13.065 EAL: No free 2048 kB hugepages reported on node 1 00:33:13.065 [2024-07-24 02:11:27.765957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.065 [2024-07-24 02:11:27.856824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:13.322 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:13.322 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:13.322 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.322 02:11:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:13.322 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:13.322 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.322 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.322 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.322 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.322 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:13.888 nvme0n1 00:33:13.888 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:13.888 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.888 02:11:28 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.888 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.888 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:13.888 02:11:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:13.888 Running I/O for 2 seconds... 00:33:13.888 [2024-07-24 02:11:28.776681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:13.888 [2024-07-24 02:11:28.776733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:13.888 [2024-07-24 02:11:28.776754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.793339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.793401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:14246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.793419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.806430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.806461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.806478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.822266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.822303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.822333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.838754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.838790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.838809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.853804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.853839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.853868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.866268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.866304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.866337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.881141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.881177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.881197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.895909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.895944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.895963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.907334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.907380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.907395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.923635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.923683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.923703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.936243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.936278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.146 [2024-07-24 02:11:28.936297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.950548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.146 [2024-07-24 02:11:28.950579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:14.146 [2024-07-24 02:11:28.950596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.146 [2024-07-24 02:11:28.963443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.147 [2024-07-24 02:11:28.963473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.147 [2024-07-24 02:11:28.963504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.147 [2024-07-24 02:11:28.977983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.147 [2024-07-24 02:11:28.978018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.147 [2024-07-24 02:11:28.978037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.147 [2024-07-24 02:11:28.991197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.147 [2024-07-24 02:11:28.991232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.147 [2024-07-24 02:11:28.991252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.147 [2024-07-24 02:11:29.005425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.147 [2024-07-24 02:11:29.005478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.147 [2024-07-24 02:11:29.005495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.147 [2024-07-24 02:11:29.018650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.147 [2024-07-24 02:11:29.018685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.147 [2024-07-24 02:11:29.018704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.147 [2024-07-24 02:11:29.030522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.147 [2024-07-24 02:11:29.030554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.147 [2024-07-24 02:11:29.030571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.043397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.043432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:13244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.043450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.056607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.056653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.056670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.068912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.068943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.068960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.080076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.080106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.080127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.095396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.095427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.095443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.105246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.105274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.105290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.120146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.120177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.120193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.133647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.133678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.133695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.144294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.144328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.144362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.156414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.156445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.156462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.172027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.172057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.172073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.182431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.182460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.182476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.195329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.195370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.195388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.208822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.208851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.208867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.223842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 
00:33:14.405 [2024-07-24 02:11:29.223872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.223888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.239183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.239214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.239231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.250272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.250323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.405 [2024-07-24 02:11:29.250342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.405 [2024-07-24 02:11:29.263109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.405 [2024-07-24 02:11:29.263139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.406 [2024-07-24 02:11:29.263156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.406 [2024-07-24 02:11:29.278208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.406 [2024-07-24 02:11:29.278238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.406 [2024-07-24 02:11:29.278255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.406 [2024-07-24 02:11:29.288488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.406 [2024-07-24 02:11:29.288519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.406 [2024-07-24 02:11:29.288536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.303792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.303825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.303843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.318343] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.318377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.318395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.329216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.329247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.329264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.344847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.344877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.344893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.358048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.358080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.358097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.368861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.368890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.368906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.384511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.384541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.384558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.398485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.398515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.398531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:14.664 [2024-07-24 02:11:29.410165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.410196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.410213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.423024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.423055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.423079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.434983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.435012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.435028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.445522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.445551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.445567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.664 [2024-07-24 02:11:29.460397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.664 [2024-07-24 02:11:29.460428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.664 [2024-07-24 02:11:29.460444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.473763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.473793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.473810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.486816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.486848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.486864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.498379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.498412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.498429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.512793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.512824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.512842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.524685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.524715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.524747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.539453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.539486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.539503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.665 [2024-07-24 02:11:29.554043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.665 [2024-07-24 02:11:29.554073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.665 [2024-07-24 02:11:29.554089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.568243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.568276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.568294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.578405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.578438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.578456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.591177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.591207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.591223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.603881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.603925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.603941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.619687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.619718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.619734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.633617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.633649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.633682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.644814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.644858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.644880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.659287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.659324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.659344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.670203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.670233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:14.923 [2024-07-24 02:11:29.670249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.683289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.683341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.923 [2024-07-24 02:11:29.683359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.923 [2024-07-24 02:11:29.695576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.923 [2024-07-24 02:11:29.695606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.695622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.707712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.707755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.707772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.721596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.721627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.721644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.734593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.734639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.734656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.745340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.745370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.745387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.760759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.760796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 
lba:14908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.760814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.776304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.776347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.776376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.786995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.787026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.787058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.801020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.801052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.801069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:14.924 [2024-07-24 02:11:29.815649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:14.924 [2024-07-24 02:11:29.815691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:14.924 [2024-07-24 02:11:29.815731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.827651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.827698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.827715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.842401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.842448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.842466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.857427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.857459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.857477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.868857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.868885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.868902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.883814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.883868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.883888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.899138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.899173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.899192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.917373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.917404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.917422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.933113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.933147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.933166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.945261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.945296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.945315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.963176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 
00:33:15.208 [2024-07-24 02:11:29.963211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.963231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.978507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.978538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.978556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:29.991081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:29.991115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:29.991134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:30.006365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:30.006410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:30.006444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:30.018614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:30.018648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:30.018683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:30.031695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:30.031731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.208 [2024-07-24 02:11:30.031751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.208 [2024-07-24 02:11:30.046541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.208 [2024-07-24 02:11:30.046572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.209 [2024-07-24 02:11:30.046603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.209 [2024-07-24 02:11:30.059668] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.209 [2024-07-24 02:11:30.059703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.209 [2024-07-24 02:11:30.059722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.209 [2024-07-24 02:11:30.076779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.209 [2024-07-24 02:11:30.076816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.209 [2024-07-24 02:11:30.076836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.089862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.089899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.089920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.104261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.104295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.104314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.117061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.117097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.117116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.132832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.132867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.132886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.148661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.148696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.148715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.166550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.166579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.166611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.180029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.180064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.180083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.192245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.192280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.192300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.206265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.206299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.206326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.219338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.219387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.219406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.232912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.232946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.232965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.245791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.245826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.475 [2024-07-24 02:11:30.245851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.475 [2024-07-24 02:11:30.261253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.475 [2024-07-24 02:11:30.261289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.261308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.275105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.275140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.275158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.286997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.287031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.287051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.301828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.301862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.301881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.318612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.318657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.318676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.334391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.334437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.334454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.346397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.346426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.346442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.476 [2024-07-24 02:11:30.363450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.476 [2024-07-24 02:11:30.363482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.476 [2024-07-24 02:11:30.363499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.734 [2024-07-24 02:11:30.380064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.734 [2024-07-24 02:11:30.380105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.734 [2024-07-24 02:11:30.380125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.734 [2024-07-24 02:11:30.390992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.734 [2024-07-24 02:11:30.391027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.734 [2024-07-24 02:11:30.391046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.734 [2024-07-24 02:11:30.407228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.734 [2024-07-24 02:11:30.407262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.734 [2024-07-24 02:11:30.407282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.734 [2024-07-24 02:11:30.423943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.734 [2024-07-24 02:11:30.423979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.734 [2024-07-24 02:11:30.423998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.734 [2024-07-24 02:11:30.441089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.734 [2024-07-24 02:11:30.441125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.734 [2024-07-24 02:11:30.441144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.734 [2024-07-24 02:11:30.456222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.734 [2024-07-24 02:11:30.456257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:15.735 [2024-07-24 02:11:30.456277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.468758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.468794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.468815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.483148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.483183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.483203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.494750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.494785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.494803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.510609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.510639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.510674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.523438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.523467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.523483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.539169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.539205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.539224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.554676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.554711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:4760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.554730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.566499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.566529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.566546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.583841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.583878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.583897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.601386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.601416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.601432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.735 [2024-07-24 02:11:30.617796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.735 [2024-07-24 02:11:30.617831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.735 [2024-07-24 02:11:30.617850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.631729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.631764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.631789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.643656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.643716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.643736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.659681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.659716] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.673718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.673753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.673772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.687280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.687313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.687357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.701758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.701793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.701813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.712688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.993 [2024-07-24 02:11:30.712722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.993 [2024-07-24 02:11:30.712741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.993 [2024-07-24 02:11:30.729334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.994 [2024-07-24 02:11:30.729382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.994 [2024-07-24 02:11:30.729398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.994 [2024-07-24 02:11:30.742825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 00:33:15.994 [2024-07-24 02:11:30.742859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.994 [2024-07-24 02:11:30.742878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.994 [2024-07-24 02:11:30.754541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18cfcc0) 
00:33:15.994 [2024-07-24 02:11:30.754574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:15.994 [2024-07-24 02:11:30.754591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:15.994
00:33:15.994 Latency(us)
00:33:15.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:15.994 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:15.994 nvme0n1 : 2.01 18370.52 71.76 0.00 0.00 6958.76 3398.16 20194.80
00:33:15.994 ===================================================================================================================
00:33:15.994 Total : 18370.52 71.76 0.00 0.00 6958.76 3398.16 20194.80
00:33:15.994 0
00:33:15.994 02:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:15.994 02:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:15.994 02:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:15.994 02:11:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:15.994 | .driver_specific
00:33:15.994 | .nvme_error
00:33:15.994 | .status_code
00:33:15.994 | .command_transient_transport_error'
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1575395
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1575395 ']'
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1575395
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1575395
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1575395'
00:33:16.251 killing process with pid 1575395
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1575395
00:33:16.251 Received shutdown signal, test time was about 2.000000 seconds
00:33:16.251
00:33:16.251 Latency(us)
00:33:16.251 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:16.251 ===================================================================================================================
00:33:16.251 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:16.251 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1575395
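The get_transient_errcount check traced above reads the per-bdev NVMe error counters through the bdev_get_iostat RPC and extracts the transient transport error count with jq; the (( 144 > 0 )) test passes because the injected digest errors were all counted. A minimal sketch of that same query, assuming a bdevperf instance is already listening on the RPC socket and using the paths and bdev name shown in the trace:

  # Read the transient transport error counter for nvme0n1 from the running bdevperf app
  # (rpc.py path, socket, bdev name and jq filter copied from the trace above).
  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only passes if at least one transient transport error was recorded.
  (( count > 0 )) || exit 1

The nvme_error block is only present because bdev_nvme_set_options was called with --nvme-error-stat when the controller was set up.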
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1575801
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1575801 /var/tmp/bperf.sock
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1575801 ']'
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:16.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:16.509 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:16.509 [2024-07-24 02:11:31.348695] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization...
00:33:16.509 [2024-07-24 02:11:31.348785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1575801 ]
00:33:16.509 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:16.509 Zero copy mechanism will not be used.
00:33:16.509 EAL: No free 2048 kB hugepages reported on node 1
00:33:16.767 [2024-07-24 02:11:31.409893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:16.767 [2024-07-24 02:11:31.494424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:16.767 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:16.767 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:16.767 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:16.767 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:17.025 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:17.025 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:17.025 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:17.025 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:17.025 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:17.025 02:11:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:17.590 nvme0n1
00:33:17.590 02:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:17.590 02:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:17.590 02:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:17.590 02:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:17.590 02:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:17.590 02:11:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:17.590 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:17.590 Zero copy mechanism will not be used.
00:33:17.590 Running I/O for 2 seconds...
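The trace above drives the next error-injection pass entirely over bdevperf's RPC socket: NVMe error statistics are enabled, a controller is attached over TCP with data digest (--ddgst) turned on so every read payload is CRC-checked, crc32c corruption is injected into the accel layer, and the queued bdevperf job is started. A condensed sketch of that RPC sequence, assuming the same bdevperf instance, socket, target address and subsystem NQN that appear in the trace:

  # All commands and flags below are copied from the xtrace above; only the RPC shorthand is added.
  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Keep per-command NVMe error counters and retry failed I/O indefinitely.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start from a clean state with no accel error injection active.
  $RPC accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF/TCP controller with data digest enabled.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Inject corruption into crc32c operations (flags exactly as in the trace above).
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued bdevperf workload (randread, 128 KiB I/O, queue depth 16, 2 seconds).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Because bdevperf was launched with -z it waits for this perform_tests call, which is why the "Running I/O for 2 seconds..." line only appears after the injection has been armed; the data digest errors that follow are the expected result of the corrupted crc32c operations.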
00:33:17.590 [2024-07-24 02:11:32.372052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.590 [2024-07-24 02:11:32.372107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.590 [2024-07-24 02:11:32.372129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.590 [2024-07-24 02:11:32.378658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.590 [2024-07-24 02:11:32.378696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.590 [2024-07-24 02:11:32.378716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.384967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.384998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.385014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.391069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.391105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.391125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.397233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.397268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.397287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.403728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.403764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.403783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.410339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.410374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.410406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.417768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.417804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.417836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.423621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.423656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.423675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.430475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.430506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.430522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.437647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.437683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.437703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.444811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.444848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.444868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.452788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.452824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.452848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.458935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.458971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.458991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.467112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.467148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.467168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.476002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.476038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.476058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.591 [2024-07-24 02:11:32.484232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.591 [2024-07-24 02:11:32.484276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.591 [2024-07-24 02:11:32.484296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.849 [2024-07-24 02:11:32.491442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.849 [2024-07-24 02:11:32.491473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.849 [2024-07-24 02:11:32.491490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.849 [2024-07-24 02:11:32.499550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.849 [2024-07-24 02:11:32.499581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.849 [2024-07-24 02:11:32.499598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.849 [2024-07-24 02:11:32.506584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.849 [2024-07-24 02:11:32.506614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.849 [2024-07-24 02:11:32.506632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.849 [2024-07-24 02:11:32.514698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.849 [2024-07-24 02:11:32.514735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:17.849 [2024-07-24 02:11:32.514754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.849 [2024-07-24 02:11:32.522268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.849 [2024-07-24 02:11:32.522305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.849 [2024-07-24 02:11:32.522333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.849 [2024-07-24 02:11:32.529359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.529395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.529411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.535730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.535760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.535776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.541859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.541894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.541913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.547980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.548015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.548034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.554177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.554212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.554230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.560594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.560624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.560666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.566914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.566948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.566966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.573073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.573108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.573126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.579558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.579589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.579605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.585816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.585850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.585869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.592000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.592035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.592053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.598430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.598469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.598485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.604559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.604608] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.604628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.610708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.610742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.610761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.616844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.616879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.616898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.622974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.623009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.623027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.629439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.629469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.629485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.636059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.636093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.636112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.642629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.642664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.642682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.649014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 
00:33:17.850 [2024-07-24 02:11:32.649047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.649066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.655588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.655631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.655647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.661802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.661851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.661870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.668038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.668072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.668090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.674506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.674535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.674551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.680843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.680877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.850 [2024-07-24 02:11:32.680895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.850 [2024-07-24 02:11:32.687019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.850 [2024-07-24 02:11:32.687053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.687071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.693398] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.693427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.693443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.699523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.699554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.699570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.705680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.705713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.705740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.712030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.712063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.712082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.718434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.718463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.718479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.724674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.724707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.724726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.730767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.730801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.730820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.737094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.737123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.737139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.851 [2024-07-24 02:11:32.743543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:17.851 [2024-07-24 02:11:32.743575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.851 [2024-07-24 02:11:32.743593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.749948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.749983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.750001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.756196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.756231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.756249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.762492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.762529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.762545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.768737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.768772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.768790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.774909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.774943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.774961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.781022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.781056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.781074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.787357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.787402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.787418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.793120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.793150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.793165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.799386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.799414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.799430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.805584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.805629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.805645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.811833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.811863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.811901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.110 [2024-07-24 02:11:32.818166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.110 [2024-07-24 02:11:32.818196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.110 [2024-07-24 02:11:32.818212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.824411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.824439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.824454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.830737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.830767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.830783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.836928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.836957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.836973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.843210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.843239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.843255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.849443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.849473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.849489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.855662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.855692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.855708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.861886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.861916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 
[2024-07-24 02:11:32.861932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.868234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.868269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.868285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.874460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.874490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.874506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.880827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.880871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.880887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.887422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.887467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.887483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.894054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.894083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.894099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.900431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.900460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.900476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.906515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.906544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.906560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.912707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.912736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.912751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.919397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.919425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.919441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.927368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.927399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.927415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.935099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.935130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.935147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.942828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.942864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.942883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.948736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.948766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.948783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.955281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.955312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.955338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.962012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.962042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.962074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.968785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.968816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.968846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.975390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.975420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.975437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.981594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.111 [2024-07-24 02:11:32.981624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.111 [2024-07-24 02:11:32.981647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.111 [2024-07-24 02:11:32.988080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.112 [2024-07-24 02:11:32.988110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.112 [2024-07-24 02:11:32.988126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.112 [2024-07-24 02:11:32.994718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.112 [2024-07-24 02:11:32.994750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.112 [2024-07-24 02:11:32.994766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.112 [2024-07-24 02:11:33.001632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.112 [2024-07-24 02:11:33.001680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.112 [2024-07-24 02:11:33.001701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.370 [2024-07-24 02:11:33.008160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.370 [2024-07-24 02:11:33.008193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.370 [2024-07-24 02:11:33.008226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.370 [2024-07-24 02:11:33.012404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.370 [2024-07-24 02:11:33.012435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.370 [2024-07-24 02:11:33.012452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.370 [2024-07-24 02:11:33.017935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.370 [2024-07-24 02:11:33.017965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.370 [2024-07-24 02:11:33.017981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.370 [2024-07-24 02:11:33.023636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.370 [2024-07-24 02:11:33.023666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.370 [2024-07-24 02:11:33.023682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.370 [2024-07-24 02:11:33.029688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.029718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.029735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.036360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.036397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.036414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.042758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 
00:33:18.371 [2024-07-24 02:11:33.042788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.042804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.049272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.049324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.049344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.055770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.055800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.055816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.062541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.062572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.062588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.069097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.069128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.069145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.075946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.075977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.075994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.082542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.082571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.082596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.088959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.088989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.089004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.095603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.095632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.095649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.102443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.102473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.102490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.108846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.108876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.108892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.115536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.115566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.115582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.122641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.122672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.122688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.129827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.129857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.129873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.136622] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.136657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.136676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.142559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.142590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.142621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.149076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.149107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.149131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.155843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.155873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.155889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.162372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.162403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.162419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.168638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.168667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.168683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.175216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.175246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.175262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:18.371 [2024-07-24 02:11:33.181984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.182015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.182031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.188394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.188424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.188440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.194800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.371 [2024-07-24 02:11:33.194832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.371 [2024-07-24 02:11:33.194848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.371 [2024-07-24 02:11:33.201395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.201424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.201439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.207679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.207715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.207731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.213985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.214014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.214030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.220296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.220333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.220351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.226618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.226647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.226663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.233016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.233045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.233061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.239222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.239252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.239268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.245694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.245724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.245740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.252198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.252228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.252244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.372 [2024-07-24 02:11:33.258410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.372 [2024-07-24 02:11:33.258440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.372 [2024-07-24 02:11:33.258468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.265274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.265328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.265347] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.271751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.271781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.271797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.278129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.278159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.278176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.284490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.284519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.284535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.290643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.290674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.290706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.297183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.297213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.297228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.303540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.303569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.303584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.309862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.631 [2024-07-24 02:11:33.309891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.631 [2024-07-24 02:11:33.309907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.631 [2024-07-24 02:11:33.316307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.316350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.316366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.322646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.322675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.322692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.329002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.329031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.329047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.335359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.335389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.335405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.341565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.341595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.341611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.347798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.347826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.347842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.354215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.354245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:18.632 [2024-07-24 02:11:33.354261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.360533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.360562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.360578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.366930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.366959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.366974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.373346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.373376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.373392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.379661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.379691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.379707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.386369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.386399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.386416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.392733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.392762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.392778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.399266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.399309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.399335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.405717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.405746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.405761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.412025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.412054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.412070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.418468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.418498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.418514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.424830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.424859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.424884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.431244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.431273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.431289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.437477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.437507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.437523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.445830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.445862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.445878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.452949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.452978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.452994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.459599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.459646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.459662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.466259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.632 [2024-07-24 02:11:33.466290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.632 [2024-07-24 02:11:33.466330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.632 [2024-07-24 02:11:33.472852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.472882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.472898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.479241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.479272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.479288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.485449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.485492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.485509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.491799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 
00:33:18.633 [2024-07-24 02:11:33.491830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.491846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.498340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.498371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.498388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.504695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.504725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.504742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.511138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.511168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.511184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.517513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.517543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.517559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.633 [2024-07-24 02:11:33.524145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.633 [2024-07-24 02:11:33.524177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.633 [2024-07-24 02:11:33.524194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.892 [2024-07-24 02:11:33.530723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.892 [2024-07-24 02:11:33.530757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.892 [2024-07-24 02:11:33.530775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.892 [2024-07-24 02:11:33.537334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.892 [2024-07-24 02:11:33.537363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.892 [2024-07-24 02:11:33.537380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.892 [2024-07-24 02:11:33.543825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.892 [2024-07-24 02:11:33.543856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.892 [2024-07-24 02:11:33.543873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.892 [2024-07-24 02:11:33.550326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.892 [2024-07-24 02:11:33.550356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.892 [2024-07-24 02:11:33.550372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.892 [2024-07-24 02:11:33.557293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.892 [2024-07-24 02:11:33.557340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.557359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.565500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.565532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.565549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.573518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.573551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.573568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.581238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.581269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.581286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.587777] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.587808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.587824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.594285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.594314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.594341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.600746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.600776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.600804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.607225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.607255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.607271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.613860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.613891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.613908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.620234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.620265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.620281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.626630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.626661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.626678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:33:18.893 [2024-07-24 02:11:33.633073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.633103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.633120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.639588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.639633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.639650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.646201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.646231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.646247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.652832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.652865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.652882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.659412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.659460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.659478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.665701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.665732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.665749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.672201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.672232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.672248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.678547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.678577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.678594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.685197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.685228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.685244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.691513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.691544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.691560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.698065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.698097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.698114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.704579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.704611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.704627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.711052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.711083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.711108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.717519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.717551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.717568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.724006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.724037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.893 [2024-07-24 02:11:33.724054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.893 [2024-07-24 02:11:33.730433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.893 [2024-07-24 02:11:33.730464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.730481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.737031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.737061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.737077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.743467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.743497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.743514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.750179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.750209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.750225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.756603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.756634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.756650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.763346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.763394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 
[2024-07-24 02:11:33.763414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.769727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.769766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.769783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.776280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.776311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.776337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:18.894 [2024-07-24 02:11:33.782655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:18.894 [2024-07-24 02:11:33.782687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:18.894 [2024-07-24 02:11:33.782704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.153 [2024-07-24 02:11:33.789252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.153 [2024-07-24 02:11:33.789283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.153 [2024-07-24 02:11:33.789300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.153 [2024-07-24 02:11:33.795773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.153 [2024-07-24 02:11:33.795803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.153 [2024-07-24 02:11:33.795820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.153 [2024-07-24 02:11:33.802171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.153 [2024-07-24 02:11:33.802201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.153 [2024-07-24 02:11:33.802217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.153 [2024-07-24 02:11:33.808520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.153 [2024-07-24 02:11:33.808550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.153 [2024-07-24 02:11:33.808566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.153 [2024-07-24 02:11:33.815047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.153 [2024-07-24 02:11:33.815079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.153 [2024-07-24 02:11:33.815096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.153 [2024-07-24 02:11:33.821536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.153 [2024-07-24 02:11:33.821566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.821582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.828131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.828161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.828177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.834607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.834638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.834654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.841031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.841062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.841078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.847440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.847471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.847488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.853807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.853840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.853857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.860358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.860389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.860406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.866848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.866880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.866897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.873356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.873387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.873404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.879706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.879736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.879764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.886386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.886417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.886434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.893021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.893052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.893069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.899516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.899548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.899565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.906086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.906118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.906135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.912574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.912620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.912636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.919275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.919309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.919339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.925686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.925716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.925733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.932099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.932129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.932145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.938509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.938550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.938568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.945036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.945066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.945082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.951473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.951503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.951519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.957971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.958001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.958018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.964489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.964520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.964536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.970850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.970880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.970896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.977460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.977490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.977507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.983831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.154 [2024-07-24 02:11:33.983863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.154 [2024-07-24 02:11:33.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.154 [2024-07-24 02:11:33.990376] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:33.990406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:33.990423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:33.996839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:33.996870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:33.996886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:34.003411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.003441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.003457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:34.010047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.010078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.010095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:34.016600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.016630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.016647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:34.023055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.023085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.023101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:34.029540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.029569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.029585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:33:19.155 [2024-07-24 02:11:34.036151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.036182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.036198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.155 [2024-07-24 02:11:34.042593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.155 [2024-07-24 02:11:34.042624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.155 [2024-07-24 02:11:34.042640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.414 [2024-07-24 02:11:34.049354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.414 [2024-07-24 02:11:34.049385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.414 [2024-07-24 02:11:34.049413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.055861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.055891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.055908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.062369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.062401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.062417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.068750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.068781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.068797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.075461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.075508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.075525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.081877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.081908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.081925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.088361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.088392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.088408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.094732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.094762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.094778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.101289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.101326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.101345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.107681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.107711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.107728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.114178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.114207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.114223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.120444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.120475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.120491] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.126891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.126921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.126937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.133309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.133347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.133363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.139662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.139708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.139725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.146440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.146472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.146489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.153124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.153154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.153171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.159738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.159768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.159794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.166397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.166429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.166446] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.172965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.172995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.173012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.179398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.179440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.179457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.185822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.185852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.185869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.192361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.192391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.192408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.198864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.198895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.198911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.205354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.205384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.205399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.211805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.211836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:19.415 [2024-07-24 02:11:34.211852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.415 [2024-07-24 02:11:34.218303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.415 [2024-07-24 02:11:34.218349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.415 [2024-07-24 02:11:34.218368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.224872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.224902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.224918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.231442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.231473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.231491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.237932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.237963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.237979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.244313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.244362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.244393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.250800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.250830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.250847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.257412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.257441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12384 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.257458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.263803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.263834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.263851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.270402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.270433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.270450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.277033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.277064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.277081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.283647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.283678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.283694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.290176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.290206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.290222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.296842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.296872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.296888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.416 [2024-07-24 02:11:34.303383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.416 [2024-07-24 02:11:34.303414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.416 [2024-07-24 02:11:34.303430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.310192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.310224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.310241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.316694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.316725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.316741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.323246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.323276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.323292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.329651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.329683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.329710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.336581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.336612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.336629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.344746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.344779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.344798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.352650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.352683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.352700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.360880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.360912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.360929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.675 [2024-07-24 02:11:34.368106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e286b0) 00:33:19.675 [2024-07-24 02:11:34.368138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.675 [2024-07-24 02:11:34.368154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.675 00:33:19.675 Latency(us) 00:33:19.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.675 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:19.675 nvme0n1 : 2.00 4746.08 593.26 0.00 0.00 3366.21 761.55 9272.13 00:33:19.675 =================================================================================================================== 00:33:19.675 Total : 4746.08 593.26 0.00 0.00 3366.21 761.55 9272.13 00:33:19.675 0 00:33:19.675 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:19.675 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:19.675 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:19.675 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:19.675 | .driver_specific 00:33:19.675 | .nvme_error 00:33:19.675 | .status_code 00:33:19.675 | .command_transient_transport_error' 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 306 > 0 )) 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1575801 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1575801 ']' 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1575801 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1575801 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1575801' 00:33:19.933 killing process with pid 1575801 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1575801 00:33:19.933 Received shutdown signal, test time was about 2.000000 seconds 00:33:19.933 00:33:19.933 Latency(us) 00:33:19.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.933 =================================================================================================================== 00:33:19.933 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:19.933 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1575801 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1576209 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1576209 /var/tmp/bperf.sock 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1576209 ']' 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:20.191 02:11:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.191 [2024-07-24 02:11:34.983045] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:33:20.191 [2024-07-24 02:11:34.983137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576209 ] 00:33:20.191 EAL: No free 2048 kB hugepages reported on node 1 00:33:20.191 [2024-07-24 02:11:35.045643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.450 [2024-07-24 02:11:35.133840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.450 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:20.450 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:20.450 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:20.450 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:20.707 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:20.707 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.707 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.707 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.707 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.707 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:21.274 nvme0n1 00:33:21.274 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:21.274 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:21.274 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:21.274 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:21.274 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:21.274 02:11:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:21.274 Running I/O for 2 seconds... 
00:33:21.274 [2024-07-24 02:11:36.016631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ee5c8 00:33:21.274 [2024-07-24 02:11:36.017749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.017789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.028714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fac10 00:33:21.274 [2024-07-24 02:11:36.029731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.029775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.041846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190eaef0 00:33:21.274 [2024-07-24 02:11:36.043009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.043043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.056163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e6300 00:33:21.274 [2024-07-24 02:11:36.057555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.057608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.069460] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb480 00:33:21.274 [2024-07-24 02:11:36.070991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.071024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.081524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e7c50 00:33:21.274 [2024-07-24 02:11:36.083046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.083078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.094804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ddc00 00:33:21.274 [2024-07-24 02:11:36.096555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.096599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 
p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.106818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f0bc0 00:33:21.274 [2024-07-24 02:11:36.107974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.108006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.119738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e01f8 00:33:21.274 [2024-07-24 02:11:36.120743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.120776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.131756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f7100 00:33:21.274 [2024-07-24 02:11:36.133643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.133676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.142619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f2d80 00:33:21.274 [2024-07-24 02:11:36.143487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.274 [2024-07-24 02:11:36.143530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:21.274 [2024-07-24 02:11:36.155970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fda78 00:33:21.274 [2024-07-24 02:11:36.156976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.275 [2024-07-24 02:11:36.157008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.169505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e3498 00:33:21.533 [2024-07-24 02:11:36.170716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.170750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.183782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fef90 00:33:21.533 [2024-07-24 02:11:36.185130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.185163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.195586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ed0b0 00:33:21.533 [2024-07-24 02:11:36.196959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.196990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.208876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e7818 00:33:21.533 [2024-07-24 02:11:36.210397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.210441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.220814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190df988 00:33:21.533 [2024-07-24 02:11:36.221814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.221846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.233659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ec840 00:33:21.533 [2024-07-24 02:11:36.234505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.234535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.246929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f2d80 00:33:21.533 [2024-07-24 02:11:36.247929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.247961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.258949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb048 00:33:21.533 [2024-07-24 02:11:36.260856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.260888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.269885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f57b0 00:33:21.533 [2024-07-24 02:11:36.270728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.270760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.283293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb480 00:33:21.533 [2024-07-24 02:11:36.284386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.284415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.296739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ec840 00:33:21.533 [2024-07-24 02:11:36.297901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.297934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.310201] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e6300 00:33:21.533 [2024-07-24 02:11:36.311612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.311645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.323605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e84c0 00:33:21.533 [2024-07-24 02:11:36.325122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.325154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.336965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb480 00:33:21.533 [2024-07-24 02:11:36.338651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.338683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.348918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8618 00:33:21.533 [2024-07-24 02:11:36.350111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.533 [2024-07-24 02:11:36.350143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:21.533 [2024-07-24 02:11:36.361793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f4f40 00:33:21.533 [2024-07-24 02:11:36.362805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.534 [2024-07-24 02:11:36.362838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.534 [2024-07-24 02:11:36.376437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190feb58 00:33:21.534 [2024-07-24 02:11:36.378485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.534 [2024-07-24 02:11:36.378531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.534 [2024-07-24 02:11:36.385494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f57b0 00:33:21.534 [2024-07-24 02:11:36.386338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:6474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.534 [2024-07-24 02:11:36.386379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:21.534 [2024-07-24 02:11:36.397571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f3e60 00:33:21.534 [2024-07-24 02:11:36.398427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.534 [2024-07-24 02:11:36.398471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:21.534 [2024-07-24 02:11:36.410842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fd208 00:33:21.534 [2024-07-24 02:11:36.411834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.534 [2024-07-24 02:11:36.411865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:21.534 [2024-07-24 02:11:36.424122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ebb98 00:33:21.534 [2024-07-24 02:11:36.425326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.534 [2024-07-24 02:11:36.425374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.437658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190df988 00:33:21.792 [2024-07-24 02:11:36.439002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.439039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.450942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e88f8 00:33:21.792 [2024-07-24 02:11:36.452486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.452519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.464220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fd208 00:33:21.792 [2024-07-24 02:11:36.465901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.465934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.476039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f9b30 00:33:21.792 [2024-07-24 02:11:36.477225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.477258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.488902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ef270 00:33:21.792 [2024-07-24 02:11:36.489896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.489928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.501000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb8b8 00:33:21.792 [2024-07-24 02:11:36.502943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.502976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.511872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f7100 00:33:21.792 [2024-07-24 02:11:36.512693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.512727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.526105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e6738 00:33:21.792 [2024-07-24 02:11:36.527157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.527189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.539204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fdeb0 00:33:21.792 [2024-07-24 02:11:36.540079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.540111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.552529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e12d8 00:33:21.792 [2024-07-24 02:11:36.553492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.553522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.564378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e01f8 00:33:21.792 [2024-07-24 02:11:36.566286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.566326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.575400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ef6a8 00:33:21.792 [2024-07-24 02:11:36.576235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.576269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.588750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe720 00:33:21.792 [2024-07-24 02:11:36.589774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.589810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.602158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fdeb0 00:33:21.792 [2024-07-24 02:11:36.603383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.603412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.615607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f96f8 00:33:21.792 [2024-07-24 02:11:36.616989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.792 [2024-07-24 02:11:36.617022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:21.792 [2024-07-24 02:11:36.629094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ff3c8 00:33:21.792 [2024-07-24 02:11:36.630604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.793 [2024-07-24 
02:11:36.630636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:21.793 [2024-07-24 02:11:36.642450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe720 00:33:21.793 [2024-07-24 02:11:36.644168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.793 [2024-07-24 02:11:36.644210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:21.793 [2024-07-24 02:11:36.655906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e84c0 00:33:21.793 [2024-07-24 02:11:36.657786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.793 [2024-07-24 02:11:36.657819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:21.793 [2024-07-24 02:11:36.669337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e6fa8 00:33:21.793 [2024-07-24 02:11:36.671409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.793 [2024-07-24 02:11:36.671439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:21.793 [2024-07-24 02:11:36.678461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e5ec8 00:33:21.793 [2024-07-24 02:11:36.679291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:21.793 [2024-07-24 02:11:36.679339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.690812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fa3a0 00:33:22.051 [2024-07-24 02:11:36.691632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.691676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.704177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f35f0 00:33:22.051 [2024-07-24 02:11:36.705180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.705212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.717443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8e88 00:33:22.051 [2024-07-24 02:11:36.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:22.051 [2024-07-24 02:11:36.718661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.730800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e6b70 00:33:22.051 [2024-07-24 02:11:36.732140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.732172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.744069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190df118 00:33:22.051 [2024-07-24 02:11:36.745682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.745714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.754228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e1710 00:33:22.051 [2024-07-24 02:11:36.755039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.755071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.767463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f6458 00:33:22.051 [2024-07-24 02:11:36.768485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.768528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.780772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e3d08 00:33:22.051 [2024-07-24 02:11:36.781969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.782001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.794146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f81e0 00:33:22.051 [2024-07-24 02:11:36.795524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.795553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.806173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb8b8 00:33:22.051 [2024-07-24 02:11:36.807001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21005 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.807033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.819029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e12d8 00:33:22.051 [2024-07-24 02:11:36.819716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.819748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.832273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fbcf0 00:33:22.051 [2024-07-24 02:11:36.833096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.833128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.846867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ed920 00:33:22.051 [2024-07-24 02:11:36.848708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.848739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.856945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e12d8 00:33:22.051 [2024-07-24 02:11:36.858088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.858120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.870286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fef90 00:33:22.051 [2024-07-24 02:11:36.871680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.871713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.883686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f5be8 00:33:22.051 [2024-07-24 02:11:36.885198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.885230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.897058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8e88 00:33:22.051 [2024-07-24 02:11:36.898718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7520 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.898750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:22.051 [2024-07-24 02:11:36.910343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ef6a8 00:33:22.051 [2024-07-24 02:11:36.912148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.051 [2024-07-24 02:11:36.912180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:22.052 [2024-07-24 02:11:36.923751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e0a68 00:33:22.052 [2024-07-24 02:11:36.925764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.052 [2024-07-24 02:11:36.925797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:22.052 [2024-07-24 02:11:36.932837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f6458 00:33:22.052 [2024-07-24 02:11:36.933672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.052 [2024-07-24 02:11:36.933704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:22.052 [2024-07-24 02:11:36.944967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190eaab8 00:33:22.310 [2024-07-24 02:11:36.945846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:36.945879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:36.959299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190eb328 00:33:22.310 [2024-07-24 02:11:36.960325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:36.960374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:36.972411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e7818 00:33:22.310 [2024-07-24 02:11:36.973562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:36.973606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:36.985791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ec840 00:33:22.310 [2024-07-24 02:11:36.987121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:18957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:36.987154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.000250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fb8b8 00:33:22.310 [2024-07-24 02:11:37.002233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.002265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.010439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fc128 00:33:22.310 [2024-07-24 02:11:37.011700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.011733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.023721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e38d0 00:33:22.310 [2024-07-24 02:11:37.025206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.025239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.037005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e4140 00:33:22.310 [2024-07-24 02:11:37.038683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.038716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.050467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ed4e8 00:33:22.310 [2024-07-24 02:11:37.052311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.052364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.063815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e27f0 00:33:22.310 [2024-07-24 02:11:37.065815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.065848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.073941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e9168 00:33:22.310 [2024-07-24 02:11:37.075244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:30 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.075275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.087208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190eaef0 00:33:22.310 [2024-07-24 02:11:37.088685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.088718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.100572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e0a68 00:33:22.310 [2024-07-24 02:11:37.102217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.102249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.113911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fd640 00:33:22.310 [2024-07-24 02:11:37.115768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.115801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.127253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f9f68 00:33:22.310 [2024-07-24 02:11:37.129240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.129276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.140575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e0ea0 00:33:22.310 [2024-07-24 02:11:37.142754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.310 [2024-07-24 02:11:37.142787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.310 [2024-07-24 02:11:37.149599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f57b0 00:33:22.311 [2024-07-24 02:11:37.150665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.311 [2024-07-24 02:11:37.150697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:22.311 [2024-07-24 02:11:37.161563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f3a28 00:33:22.311 [2024-07-24 02:11:37.162604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:55 nsid:1 lba:11846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.311 [2024-07-24 02:11:37.162654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:22.311 [2024-07-24 02:11:37.174944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190de470 00:33:22.311 [2024-07-24 02:11:37.176054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.311 [2024-07-24 02:11:37.176086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:22.311 [2024-07-24 02:11:37.188172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e9168 00:33:22.311 [2024-07-24 02:11:37.189502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.311 [2024-07-24 02:11:37.189531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:22.311 [2024-07-24 02:11:37.201643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fc128 00:33:22.311 [2024-07-24 02:11:37.203178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.311 [2024-07-24 02:11:37.203210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.215196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ebb98 00:33:22.569 [2024-07-24 02:11:37.216835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.569 [2024-07-24 02:11:37.216868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.228548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e49b0 00:33:22.569 [2024-07-24 02:11:37.230358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.569 [2024-07-24 02:11:37.230403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.240458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fcdd0 00:33:22.569 [2024-07-24 02:11:37.241783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.569 [2024-07-24 02:11:37.241816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.253373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fc998 00:33:22.569 [2024-07-24 02:11:37.254526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.569 [2024-07-24 02:11:37.254570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.265347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ef6a8 00:33:22.569 [2024-07-24 02:11:37.267410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.569 [2024-07-24 02:11:37.267439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.276254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f35f0 00:33:22.569 [2024-07-24 02:11:37.277232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.569 [2024-07-24 02:11:37.277263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:22.569 [2024-07-24 02:11:37.289534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e9e10 00:33:22.570 [2024-07-24 02:11:37.290694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.290726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.302968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e9168 00:33:22.570 [2024-07-24 02:11:37.304297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.304337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.316565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ecc78 00:33:22.570 [2024-07-24 02:11:37.318061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.318093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.329805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe2e8 00:33:22.570 [2024-07-24 02:11:37.331500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.331530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.343189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fef90 00:33:22.570 [2024-07-24 
02:11:37.344990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.345022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.356394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e7818 00:33:22.570 [2024-07-24 02:11:37.358366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.358394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.369673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8618 00:33:22.570 [2024-07-24 02:11:37.371823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.371855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.378636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190df988 00:33:22.570 [2024-07-24 02:11:37.379747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.379779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.390735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e5ec8 00:33:22.570 [2024-07-24 02:11:37.391759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.391790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.404103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e88f8 00:33:22.570 [2024-07-24 02:11:37.405224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.405256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.418279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe2e8 00:33:22.570 [2024-07-24 02:11:37.419599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.419647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.431426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f0350 00:33:22.570 
[2024-07-24 02:11:37.432884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.432917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.444738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e9168 00:33:22.570 [2024-07-24 02:11:37.446396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.446425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.570 [2024-07-24 02:11:37.456774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f6020 00:33:22.570 [2024-07-24 02:11:37.458489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.570 [2024-07-24 02:11:37.458518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:22.828 [2024-07-24 02:11:37.470331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190df118 00:33:22.828 [2024-07-24 02:11:37.472133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.828 [2024-07-24 02:11:37.472167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:22.828 [2024-07-24 02:11:37.482154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f9b30 00:33:22.829 [2024-07-24 02:11:37.483507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.483536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.496378] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e73e0 00:33:22.829 [2024-07-24 02:11:37.498373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.498421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.506607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8e88 00:33:22.829 [2024-07-24 02:11:37.507894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.507926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.519956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with 
pdu=0x2000190ddc00 00:33:22.829 [2024-07-24 02:11:37.521422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.521451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.533261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e23b8 00:33:22.829 [2024-07-24 02:11:37.534875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.534908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.546482] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f4b08 00:33:22.829 [2024-07-24 02:11:37.548308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.548348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.559869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ebb98 00:33:22.829 [2024-07-24 02:11:37.561847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.561880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.573269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f3a28 00:33:22.829 [2024-07-24 02:11:37.575445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.575478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.582395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f1868 00:33:22.829 [2024-07-24 02:11:37.583379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.583407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.595820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f9b30 00:33:22.829 [2024-07-24 02:11:37.596958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.596994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.607920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa25480) with pdu=0x2000190eff18 00:33:22.829 [2024-07-24 02:11:37.609034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.609066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.621181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8e88 00:33:22.829 [2024-07-24 02:11:37.622483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.622526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.634536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fa3a0 00:33:22.829 [2024-07-24 02:11:37.635999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.636031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.647835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe720 00:33:22.829 [2024-07-24 02:11:37.649484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.649526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.661161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8618 00:33:22.829 [2024-07-24 02:11:37.662915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.662948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.672922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f0788 00:33:22.829 [2024-07-24 02:11:37.674217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.674250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.685781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e6738 00:33:22.829 [2024-07-24 02:11:37.686928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.686961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.700464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa25480) with pdu=0x2000190f6458 00:33:22.829 [2024-07-24 02:11:37.702589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.702619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.709450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f1430 00:33:22.829 [2024-07-24 02:11:37.710401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.710444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:22.829 [2024-07-24 02:11:37.721576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f1ca0 00:33:22.829 [2024-07-24 02:11:37.722613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:22.829 [2024-07-24 02:11:37.722658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.735042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e5a90 00:33:23.088 [2024-07-24 02:11:37.736148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.736181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.748452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8e88 00:33:23.088 [2024-07-24 02:11:37.749743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.749776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.761733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe2e8 00:33:23.088 [2024-07-24 02:11:37.763185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.763217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.775002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e7c50 00:33:23.088 [2024-07-24 02:11:37.776685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.776717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.788490] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe720 00:33:23.088 [2024-07-24 02:11:37.790308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.790367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.802057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190ebb98 00:33:23.088 [2024-07-24 02:11:37.804082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.804114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.815514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190df988 00:33:23.088 [2024-07-24 02:11:37.817699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.817731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.824616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190dfdc0 00:33:23.088 [2024-07-24 02:11:37.825563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.825607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.837995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e0ea0 00:33:23.088 [2024-07-24 02:11:37.839074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.839106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.851287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f6458 00:33:23.088 [2024-07-24 02:11:37.852588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.852630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.864666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e3060 00:33:23.088 [2024-07-24 02:11:37.866098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.866129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.876592] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e3060 00:33:23.088 [2024-07-24 02:11:37.878056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.878087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.889921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e23b8 00:33:23.088 [2024-07-24 02:11:37.891510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.891539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.903281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190eaab8 00:33:23.088 [2024-07-24 02:11:37.905090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.905121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:23.088 [2024-07-24 02:11:37.915100] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e1710 00:33:23.088 [2024-07-24 02:11:37.916419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.088 [2024-07-24 02:11:37.916446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:23.089 [2024-07-24 02:11:37.927958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f3a28 00:33:23.089 [2024-07-24 02:11:37.929076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.089 [2024-07-24 02:11:37.929108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:23.089 [2024-07-24 02:11:37.939882] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e7c50 00:33:23.089 [2024-07-24 02:11:37.941808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.089 [2024-07-24 02:11:37.941837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.089 [2024-07-24 02:11:37.951285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190e01f8 00:33:23.089 [2024-07-24 02:11:37.952276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.089 [2024-07-24 02:11:37.952309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.089 [2024-07-24 
02:11:37.964538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190de8a8 00:33:23.089 [2024-07-24 02:11:37.965690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.089 [2024-07-24 02:11:37.965723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:23.089 [2024-07-24 02:11:37.976612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190fe720 00:33:23.089 [2024-07-24 02:11:37.977768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.089 [2024-07-24 02:11:37.977800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:23.347 [2024-07-24 02:11:37.990299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f8e88 00:33:23.347 [2024-07-24 02:11:37.991753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.347 [2024-07-24 02:11:37.991786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:23.347 [2024-07-24 02:11:38.004633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa25480) with pdu=0x2000190f5be8 00:33:23.347 [2024-07-24 02:11:38.006118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:23.347 [2024-07-24 02:11:38.006150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:23.347 00:33:23.347 Latency(us) 00:33:23.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.347 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:23.347 nvme0n1 : 2.01 20053.18 78.33 0.00 0.00 6375.71 2997.67 15631.55 00:33:23.347 =================================================================================================================== 00:33:23.347 Total : 20053.18 78.33 0.00 0.00 6375.71 2997.67 15631.55 00:33:23.347 0 00:33:23.347 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:23.347 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:23.347 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:23.347 | .driver_specific 00:33:23.347 | .nvme_error 00:33:23.347 | .status_code 00:33:23.347 | .command_transient_transport_error' 00:33:23.347 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1576209 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 1576209 ']' 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1576209 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576209 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576209' 00:33:23.605 killing process with pid 1576209 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1576209 00:33:23.605 Received shutdown signal, test time was about 2.000000 seconds 00:33:23.605 00:33:23.605 Latency(us) 00:33:23.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.605 =================================================================================================================== 00:33:23.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:23.605 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1576209 00:33:23.863 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1576612 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1576612 /var/tmp/bperf.sock 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1576612 ']' 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:23.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
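The run above ends with the harness reading the transient-error counter back over the bperf RPC socket and asserting it is non-zero (the "(( 157 > 0 ))" check traced in host/digest.sh). A minimal sketch of that query, not part of the captured output, assuming the socket path and bdev name used in this run:

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # Prints the COMMAND TRANSIENT TRANSPORT ERROR count (157 in the run above);
  # any value > 0 is taken as proof that the injected digest errors were surfaced to the host.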
00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:23.864 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:23.864 [2024-07-24 02:11:38.577462] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:23.864 [2024-07-24 02:11:38.577543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1576612 ] 00:33:23.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:23.864 Zero copy mechanism will not be used. 00:33:23.864 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.864 [2024-07-24 02:11:38.636969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.864 [2024-07-24 02:11:38.723026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:24.122 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:24.122 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:24.122 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:24.122 02:11:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:24.379 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:24.379 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.379 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:24.379 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.379 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.379 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.637 nvme0n1 00:33:24.637 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:24.637 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.637 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:24.637 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.637 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:24.637 02:11:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:24.637 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:24.637 Zero copy mechanism will not be used. 00:33:24.637 Running I/O for 2 seconds... 00:33:24.637 [2024-07-24 02:11:39.519700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.637 [2024-07-24 02:11:39.520150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.637 [2024-07-24 02:11:39.520205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.637 [2024-07-24 02:11:39.527579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.637 [2024-07-24 02:11:39.527947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.637 [2024-07-24 02:11:39.527977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.895 [2024-07-24 02:11:39.535357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.535722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.535751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.543831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.544169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.544198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.551962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.552431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.552458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.561075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.561448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.561492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.570348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 
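The traces above show how each pass is armed before I/O starts: NVMe error statistics are enabled with --bdev-retry-count -1, any previous crc32c injection is cleared, the controller is attached over TCP with the data digest (--ddgst) enabled, crc32c corruption is re-armed (-i 32), and only then does perform_tests drive the random-write workload. A minimal sketch of that RPC sequence, assuming scripts/rpc.py from the SPDK tree and that the accel injection calls go to the target's default RPC socket (the bdevperf-side calls use /var/tmp/bperf.sock as above):

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable        # clear any earlier injection (default target socket assumed)
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # attach with TCP data digest enabled
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32  # re-arm crc32c corruption (-i 32, as traced above)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests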
02:11:39.570675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.570704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.579246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.579608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.579636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.588206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.588560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.597605] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.597970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.597997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.606161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.606341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.606380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.614170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.614564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.614613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.622314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.622747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.622778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.630785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with 
pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.631133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.631161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.638960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.639367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.639395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.647567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.647939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.647967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.655768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.656090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.656118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.663141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.663441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.663469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.670382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.670707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.670735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.678579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.678986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.679016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.687265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.687709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.687741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.696149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.696504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.696533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.705119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.705461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.705489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.714105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.714508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.714536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.722876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.723232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.723260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.731739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.732072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.732100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.740424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.740826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.740858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.896 [2024-07-24 02:11:39.749156] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.896 [2024-07-24 02:11:39.749520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.896 [2024-07-24 02:11:39.749549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.897 [2024-07-24 02:11:39.757858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.897 [2024-07-24 02:11:39.758256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.897 [2024-07-24 02:11:39.758283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.897 [2024-07-24 02:11:39.766618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.897 [2024-07-24 02:11:39.766966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.897 [2024-07-24 02:11:39.766995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.897 [2024-07-24 02:11:39.775081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.897 [2024-07-24 02:11:39.775475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.897 [2024-07-24 02:11:39.775503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.897 [2024-07-24 02:11:39.784193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:24.897 [2024-07-24 02:11:39.784544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.897 [2024-07-24 02:11:39.784572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.792957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.793325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.793354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.801332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.801728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.801754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
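The repeated entries above show the host side of an NVMe/TCP data-digest check: tcp.c recomputes a CRC32C over each received DATA PDU payload, and when the result does not match the DDGST value carried in the PDU, the affected WRITE is completed with TRANSIENT TRANSPORT ERROR (00/22). Below is a minimal sketch of that digest calculation, assuming a plain bitwise CRC32C; SPDK itself uses accelerated CRC32C helpers, and the payload and variable names here are illustrative only, not taken from the SPDK sources.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78) -- the
 * algorithm NVMe/TCP uses for its header and data digests.  Illustrative
 * only: production code would use a table-driven or hardware-assisted
 * implementation.
 */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++)
			crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
	}
	return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
	/* Hypothetical DATA PDU payload; a receiver compares crc32c(payload)
	 * with the DDGST appended to the PDU and fails the command on
	 * mismatch, which is what the log entries above report. */
	const uint8_t payload[] = "123456789";
	uint32_t ddgst = crc32c(payload, sizeof(payload) - 1);

	printf("computed data digest: 0x%08x\n", ddgst);	/* 0xe3069283 */
	return 0;
}

With data digest enabled on the queue pair, every mismatch is surfaced as a transport-level failure of the individual command rather than a connection teardown, which is why the stream of per-command notices above continues on the same tqpair.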
00:33:25.156 [2024-07-24 02:11:39.810027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.810437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.810465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.818903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.819262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.819289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.827144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.827468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.827496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.835113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.835476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.835514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.843990] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.844410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.844438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.852450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.852804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.852846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.861269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.861692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.861723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.870159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.870516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.870547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.879228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.879635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.879663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.156 [2024-07-24 02:11:39.888107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.156 [2024-07-24 02:11:39.888515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.156 [2024-07-24 02:11:39.888544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.896707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.897054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.897096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.905302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.905742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.905774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.914084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.914498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.914526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.922456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.922849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.922895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.931187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.931554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.931584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.939787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.940221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.940267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.948519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.948868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.948896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.957006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.957480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.957507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.966088] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.966503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.966533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.974790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.975152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.975179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.983340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.983743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.983771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:39.992004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:39.992346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:39.992400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:40.000209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:40.000567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:40.000597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:40.009506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:40.009934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:40.009973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:40.020476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:40.020865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:40.020914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:40.031026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:40.031521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:40.031553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.157 [2024-07-24 02:11:40.040947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.157 [2024-07-24 02:11:40.041352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.157 [2024-07-24 02:11:40.041384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.050936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.051285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 
[2024-07-24 02:11:40.051323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.058956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.059275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.059306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.066215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.066531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.066571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.074117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.074431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.074460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.081604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.081888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.081916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.089194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.089507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.089535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.097083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.097399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.097427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.416 [2024-07-24 02:11:40.104252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.416 [2024-07-24 02:11:40.104616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:25.416 [2024-07-24 02:11:40.104644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.111693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.111986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.112013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.119082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.119403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.119431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.126865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.127170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.127197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.134748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.135072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.135113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.142336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.142653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.142681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.150053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.150365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.150392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.157505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.157833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.157860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.165181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.165495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.165523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.172642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.172941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.172969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.180485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.180783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.180811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.187974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.188278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.188328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.195453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.195762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.195794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.202793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.203138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.203166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.210536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.210872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.210899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.217979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.218263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.218290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.224979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.225278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.225327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.232246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.232541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.232569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.239514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.239786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.239814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.247166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.247448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.247476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.254505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.254809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.254837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.262687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.263053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.263103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.271107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.271494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.271523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.278767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.279058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.279087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.285730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.285958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.285986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.292695] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.292983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.417 [2024-07-24 02:11:40.293013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.417 [2024-07-24 02:11:40.299520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.417 [2024-07-24 02:11:40.299813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.418 [2024-07-24 02:11:40.299841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.418 [2024-07-24 02:11:40.306865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.418 [2024-07-24 02:11:40.307165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.418 [2024-07-24 02:11:40.307194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.676 [2024-07-24 02:11:40.314147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.676 
[2024-07-24 02:11:40.314464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.676 [2024-07-24 02:11:40.314493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.676 [2024-07-24 02:11:40.321393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.676 [2024-07-24 02:11:40.321680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.676 [2024-07-24 02:11:40.321723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.676 [2024-07-24 02:11:40.329105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.676 [2024-07-24 02:11:40.329443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.676 [2024-07-24 02:11:40.329487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.676 [2024-07-24 02:11:40.336646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.676 [2024-07-24 02:11:40.336949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.336977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.344293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.344588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.344630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.351981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.352332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.352376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.359240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.359551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.359579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.366740] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with 
pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.367025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.367067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.374173] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.374482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.374510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.381843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.382157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.382184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.389023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.389376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.389409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.396707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.397024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.397052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.404058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.404375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.404403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.411630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.411912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.411940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.419133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.419432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.419460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.426600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.426935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.426962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.433886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.434193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.434221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.441311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.441625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.441652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.448654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.448941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.448969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.456116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.456424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.456453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.463538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.463838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.463865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.470581] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.470853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.470881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.478244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.478547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.478575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.485456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.485729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.485757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.492637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.492947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.492975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.499978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.500296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.500331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.507428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.507704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.507732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.515187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.515488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.515515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
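Each digest error above is followed by spdk_nvme_print_completion showing the resulting status: TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0 (generic) and status code 0x22, with m:0 and dnr:0. Because the Do Not Retry bit is clear, the initiator remains free to resubmit the command. A small sketch of how those fields unpack from the 15-bit NVMe completion Status Field follows; the sample value and struct names are illustrative, not taken from the SPDK sources.

#include <stdint.h>
#include <stdio.h>

/*
 * Unpack the 15-bit Status Field from an NVMe completion queue entry
 * (CQE dword 3, bits 31:17).  Layout per the NVMe base specification:
 * SC = bits 7:0, SCT = bits 10:8, CRD = bits 12:11, M = bit 13, DNR = bit 14.
 */
struct nvme_status {
	uint8_t sc;	/* status code */
	uint8_t sct;	/* status code type */
	uint8_t crd;	/* command retry delay */
	uint8_t m;	/* more information available */
	uint8_t dnr;	/* do not retry */
};

static struct nvme_status decode_status(uint16_t sf)
{
	struct nvme_status s = {
		.sc  = (uint8_t)(sf & 0xff),
		.sct = (uint8_t)((sf >> 8) & 0x7),
		.crd = (uint8_t)((sf >> 11) & 0x3),
		.m   = (uint8_t)((sf >> 13) & 0x1),
		.dnr = (uint8_t)((sf >> 14) & 0x1),
	};
	return s;
}

int main(void)
{
	/* Illustrative value: SCT 0, SC 0x22 (Transient Transport Error),
	 * CRD 0, M 0, DNR 0 -- matching the "(00/22) ... m:0 dnr:0" prints. */
	uint16_t sf = 0x0022;
	struct nvme_status s = decode_status(sf);

	printf("sct:%u sc:0x%02x crd:%u m:%u dnr:%u retryable:%s\n",
	       s.sct, s.sc, s.crd, s.m, s.dnr, s.dnr ? "no" : "yes");
	return 0;
}

Reporting the digest failure as a transient transport error rather than a media or data error leaves the retry decision with the initiator, which is consistent with dnr:0 appearing in every completion logged above.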
00:33:25.677 [2024-07-24 02:11:40.522778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.523099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.677 [2024-07-24 02:11:40.523140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.677 [2024-07-24 02:11:40.529932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.677 [2024-07-24 02:11:40.530250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.678 [2024-07-24 02:11:40.530281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.678 [2024-07-24 02:11:40.537486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.678 [2024-07-24 02:11:40.537761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.678 [2024-07-24 02:11:40.537788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.678 [2024-07-24 02:11:40.544754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.678 [2024-07-24 02:11:40.545040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.678 [2024-07-24 02:11:40.545067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.678 [2024-07-24 02:11:40.552227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.678 [2024-07-24 02:11:40.552556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.678 [2024-07-24 02:11:40.552584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.678 [2024-07-24 02:11:40.559516] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.678 [2024-07-24 02:11:40.559831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.678 [2024-07-24 02:11:40.559858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.678 [2024-07-24 02:11:40.566701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.678 [2024-07-24 02:11:40.567023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.678 [2024-07-24 02:11:40.567052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.574307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.574609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.574638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.581280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.581593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.581625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.588568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.588879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.588907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.595869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.596210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.596241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.603436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.603720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.603748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.610844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.611179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.611210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.618443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.618756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.618783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.625720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.626028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.626055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.633257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.633552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.633582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.640749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.641058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.641085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.648021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.648350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.648378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.655445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.655753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.655782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.662743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.663075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.663102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.670005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.670295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.670344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.676952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.677273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.677304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.684229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.684544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.684572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.691844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.692188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.692219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.699183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.699493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.699521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.706891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.707194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.707221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.714094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.714422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.714449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.721595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.721914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 
[2024-07-24 02:11:40.721941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.729003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.729309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.729343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.736363] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.736647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.937 [2024-07-24 02:11:40.736674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.937 [2024-07-24 02:11:40.743580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.937 [2024-07-24 02:11:40.743856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.743883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.750746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.751035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.751062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.757970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.758324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.758351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.765245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.765544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.765572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.772518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.772832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.772864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.779964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.780266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.780292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.786942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.787276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.787307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.794397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.794696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.794724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.801606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.801876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.801919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.808724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.809024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.809051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.816919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.817329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.817369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.938 [2024-07-24 02:11:40.825585] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:25.938 [2024-07-24 02:11:40.825930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.938 [2024-07-24 02:11:40.825974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.196 [2024-07-24 02:11:40.834360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.196 [2024-07-24 02:11:40.834703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.834746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.842595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.843037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.843069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.852145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.852538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.852566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.860774] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.861107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.861134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.869565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.869966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.869995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.878307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.878764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.878795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.887289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.887707] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.887735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.895778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.896154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.896200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.904448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.904775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.904817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.912962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.913298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.913339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.920629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.920961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.920989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.928017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.928329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.928356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.935728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.936060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.936102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.944168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.944494] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.944522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.952202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.952592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.952637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.960431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.960780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.960823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.968296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.968628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.968656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.975578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.975863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.975891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.982530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.982815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.982847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.990074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:40.990400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.990428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:40.997147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 
00:33:26.197 [2024-07-24 02:11:40.997425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:40.997468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:41.004469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:41.004755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:41.004782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:41.011652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:41.011988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:41.012015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:41.018723] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:41.019007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:41.019049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:41.025972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:41.026263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:41.026291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:41.033579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.197 [2024-07-24 02:11:41.033867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.197 [2024-07-24 02:11:41.033895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.197 [2024-07-24 02:11:41.040535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.040833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.040861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.198 [2024-07-24 02:11:41.047732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.048042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.048070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.198 [2024-07-24 02:11:41.055056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.055354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.055381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.198 [2024-07-24 02:11:41.062190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.062486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.062514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.198 [2024-07-24 02:11:41.069416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.069687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.069716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.198 [2024-07-24 02:11:41.076707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.077047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.198 [2024-07-24 02:11:41.084056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.198 [2024-07-24 02:11:41.084385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.198 [2024-07-24 02:11:41.084413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.091400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.091677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.091721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.099327] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.099638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.099665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.106663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.106977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.107005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.113952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.114283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.114314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.121960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.122283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.122334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.129328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.129623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.129651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.136620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.136942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.136971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.143913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.144283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.144331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:33:26.457 [2024-07-24 02:11:41.151280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.151578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.151606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.158592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.158891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.158922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.166212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.166513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.166541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.173759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.174098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.174138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.181150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.181451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.181480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.188597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.188897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.188926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.195633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.195945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.195974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.202900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.203185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.203214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.210557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.210857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.210885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.217837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.218160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.218191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.225376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.225647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.225675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.232879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.233177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.233204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.240555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.240870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.240898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.248230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.457 [2024-07-24 02:11:41.248536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.457 [2024-07-24 02:11:41.248565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.457 [2024-07-24 02:11:41.255629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.255943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.255970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.264053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.264433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.264477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.271519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.271803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.271831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.279928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.280332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.280360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.288961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.289328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.289372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.297684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.298102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.298133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.306413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.306761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.306788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.315410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.315772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.315818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.324161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.324568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.324612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.333220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.333615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.333643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.341756] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.458 [2024-07-24 02:11:41.342179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.458 [2024-07-24 02:11:41.342210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.458 [2024-07-24 02:11:41.350911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.351325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.351371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.360048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.360462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.360490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.368876] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.369222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 
[2024-07-24 02:11:41.369253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.376184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.376550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.376578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.383842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.384149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.384181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.391349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.391625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.391653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.398406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.398678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.398706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.406001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.406303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.406354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.413464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.413785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.413813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.420564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.420851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.420878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.427531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.427816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.427858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.434764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.435053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.435080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.442249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.442568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.442596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.449906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.450210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.450237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.457252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.457548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.457576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.464284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.464605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.464632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.471328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.471619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.478679] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.479039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.479065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.486635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.486960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.487003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.494142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.494456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.494483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.501195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.501524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.501553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.508595] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.508903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.508949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:26.717 [2024-07-24 02:11:41.515710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa257c0) with pdu=0x2000190fef90 00:33:26.717 [2024-07-24 02:11:41.515976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:26.717 [2024-07-24 02:11:41.516003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:26.717 00:33:26.717 Latency(us) 00:33:26.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.717 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:26.717 
nvme0n1 : 2.00 3951.17 493.90 0.00 0.00 4039.98 2063.17 10971.21 00:33:26.717 =================================================================================================================== 00:33:26.718 Total : 3951.17 493.90 0.00 0.00 4039.98 2063.17 10971.21 00:33:26.718 0 00:33:26.718 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:26.718 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:26.718 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:26.718 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:26.718 | .driver_specific 00:33:26.718 | .nvme_error 00:33:26.718 | .status_code 00:33:26.718 | .command_transient_transport_error' 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 255 > 0 )) 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1576612 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1576612 ']' 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1576612 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576612 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576612' 00:33:26.976 killing process with pid 1576612 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1576612 00:33:26.976 Received shutdown signal, test time was about 2.000000 seconds 00:33:26.976 00:33:26.976 Latency(us) 00:33:26.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.976 =================================================================================================================== 00:33:26.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.976 02:11:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1576612 00:33:27.233 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1575249 00:33:27.233 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1575249 ']' 00:33:27.233 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1575249 00:33:27.233 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 
-- # '[' Linux = Linux ']' 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1575249 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1575249' 00:33:27.234 killing process with pid 1575249 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1575249 00:33:27.234 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1575249 00:33:27.492 00:33:27.492 real 0m15.072s 00:33:27.492 user 0m30.173s 00:33:27.492 sys 0m3.968s 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.492 ************************************ 00:33:27.492 END TEST nvmf_digest_error 00:33:27.492 ************************************ 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:27.492 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:27.492 rmmod nvme_tcp 00:33:27.492 rmmod nvme_fabrics 00:33:27.492 rmmod nvme_keyring 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1575249 ']' 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1575249 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1575249 ']' 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1575249 00:33:27.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1575249) - No such process 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1575249 is not found' 00:33:27.750 Process with pid 1575249 is not found 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:27.750 02:11:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.750 02:11:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:29.649 00:33:29.649 real 0m34.814s 00:33:29.649 user 1m0.739s 00:33:29.649 sys 0m9.938s 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:29.649 ************************************ 00:33:29.649 END TEST nvmf_digest 00:33:29.649 ************************************ 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:29.649 02:11:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:29.649 ************************************ 00:33:29.649 START TEST nvmf_bdevperf 00:33:29.649 ************************************ 00:33:29.650 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:29.908 * Looking for test storage... 
00:33:29.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:29.908 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:29.909 02:11:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:31.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:31.821 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.821 02:11:46 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:31.821 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:31.821 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:31.821 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.822 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:32.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:32.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:33:32.081 00:33:32.081 --- 10.0.0.2 ping statistics --- 00:33:32.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.081 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:32.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:32.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:33:32.081 00:33:32.081 --- 10.0.0.1 ping statistics --- 00:33:32.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:32.081 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1578961 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1578961 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1578961 ']' 
00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:32.081 02:11:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.081 [2024-07-24 02:11:46.808423] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:32.081 [2024-07-24 02:11:46.808532] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.081 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.081 [2024-07-24 02:11:46.879023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:32.340 [2024-07-24 02:11:46.976255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.340 [2024-07-24 02:11:46.976330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.340 [2024-07-24 02:11:46.976349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.340 [2024-07-24 02:11:46.976363] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.340 [2024-07-24 02:11:46.976376] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
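For reference, the nvmfappstart step traced above boils down to launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waiting for it to answer on its default RPC socket (/var/tmp/spdk.sock). A minimal standalone sketch of that step, using only paths and flags that appear in this trace; the polling loop and the rpc_get_methods probe are illustrative, not the exact waitforlisten implementation:

    # launch the target in the namespace set up earlier, same flags as the trace above
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # block until the app is up and listening on its default RPC socket
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done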
00:33:32.340 [2024-07-24 02:11:46.976441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:32.340 [2024-07-24 02:11:46.976508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:32.340 [2024-07-24 02:11:46.976513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.340 [2024-07-24 02:11:47.126998] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.340 Malloc0 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:32.340 [2024-07-24 02:11:47.188707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:32.340 { 00:33:32.340 "params": { 00:33:32.340 "name": "Nvme$subsystem", 00:33:32.340 "trtype": "$TEST_TRANSPORT", 00:33:32.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:32.340 "adrfam": "ipv4", 00:33:32.340 "trsvcid": "$NVMF_PORT", 00:33:32.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:32.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:32.340 "hdgst": ${hdgst:-false}, 00:33:32.340 "ddgst": ${ddgst:-false} 00:33:32.340 }, 00:33:32.340 "method": "bdev_nvme_attach_controller" 00:33:32.340 } 00:33:32.340 EOF 00:33:32.340 )") 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:32.340 02:11:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:32.340 "params": { 00:33:32.340 "name": "Nvme1", 00:33:32.340 "trtype": "tcp", 00:33:32.340 "traddr": "10.0.0.2", 00:33:32.340 "adrfam": "ipv4", 00:33:32.340 "trsvcid": "4420", 00:33:32.340 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:32.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:32.340 "hdgst": false, 00:33:32.340 "ddgst": false 00:33:32.340 }, 00:33:32.340 "method": "bdev_nvme_attach_controller" 00:33:32.340 }' 00:33:32.340 [2024-07-24 02:11:47.234207] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:32.340 [2024-07-24 02:11:47.234297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579110 ] 00:33:32.598 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.598 [2024-07-24 02:11:47.293603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.598 [2024-07-24 02:11:47.380348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.856 Running I/O for 1 seconds... 
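The rpc_cmd provisioning captured a few lines above (TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420) can be replayed by hand against the same target. A sketch, assuming rpc_cmd resolves to the in-tree rpc.py talking to the default /var/tmp/spdk.sock shown earlier:

    # helper mirroring what rpc_cmd does in autotest_common.sh (a sketch, not the exact wrapper)
    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192                                      # host/bdevperf.sh@17
    rpc bdev_malloc_create 64 512 -b Malloc0                                          # host/bdevperf.sh@18
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # host/bdevperf.sh@19
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                      # host/bdevperf.sh@20
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # host/bdevperf.sh@21

After that, the bdevperf client only needs the JSON fragment shown in the trace (traddr 10.0.0.2, trsvcid 4420, subnqn cnode1, digests off) fed through --json to attach Nvme1 and start the verify workload.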
00:33:34.231 00:33:34.231 Latency(us) 00:33:34.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:34.231 Verification LBA range: start 0x0 length 0x4000 00:33:34.231 Nvme1n1 : 1.01 8919.99 34.84 0.00 0.00 14258.38 1456.36 14951.92 00:33:34.231 =================================================================================================================== 00:33:34.231 Total : 8919.99 34.84 0.00 0.00 14258.38 1456.36 14951.92 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1579249 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:34.231 { 00:33:34.231 "params": { 00:33:34.231 "name": "Nvme$subsystem", 00:33:34.231 "trtype": "$TEST_TRANSPORT", 00:33:34.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:34.231 "adrfam": "ipv4", 00:33:34.231 "trsvcid": "$NVMF_PORT", 00:33:34.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.231 "hdgst": ${hdgst:-false}, 00:33:34.231 "ddgst": ${ddgst:-false} 00:33:34.231 }, 00:33:34.231 "method": "bdev_nvme_attach_controller" 00:33:34.231 } 00:33:34.231 EOF 00:33:34.231 )") 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:34.231 02:11:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:34.231 "params": { 00:33:34.231 "name": "Nvme1", 00:33:34.231 "trtype": "tcp", 00:33:34.231 "traddr": "10.0.0.2", 00:33:34.231 "adrfam": "ipv4", 00:33:34.231 "trsvcid": "4420", 00:33:34.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:34.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:34.231 "hdgst": false, 00:33:34.231 "ddgst": false 00:33:34.231 }, 00:33:34.231 "method": "bdev_nvme_attach_controller" 00:33:34.231 }' 00:33:34.231 [2024-07-24 02:11:48.998058] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:34.231 [2024-07-24 02:11:48.998143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579249 ] 00:33:34.231 EAL: No free 2048 kB hugepages reported on node 1 00:33:34.231 [2024-07-24 02:11:49.057175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.489 [2024-07-24 02:11:49.144106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.489 Running I/O for 15 seconds... 
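The second bdevperf run above uses -t 15, long enough for the step that follows: the host/bdevperf.sh@33 and @35 xtrace lines below kill the nvmf target (pid 1578961) while I/O is still in flight and then sleep, so every outstanding request completes with ABORTED - SQ DELETION, which is the long stream of completions that follows. Reduced to a sketch, with the pid taken from this run:

    # from the @33/@35 trace below: pull the target out from under bdevperf mid-run
    kill -9 1578961   # nvmf_tgt pid for this run (held in $nvmfpid by the script)
    sleep 3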
00:33:37.775 02:11:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1578961 00:33:37.775 02:11:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:37.775 [2024-07-24 02:11:51.967456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.775 [2024-07-24 02:11:51.967505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.775 [2024-07-24 02:11:51.967545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.775 [2024-07-24 02:11:51.967561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.775 [2024-07-24 02:11:51.967580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.775 [2024-07-24 02:11:51.967623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.775 [2024-07-24 02:11:51.967642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.775 [2024-07-24 02:11:51.967659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.775 [2024-07-24 02:11:51.967678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.775 [2024-07-24 02:11:51.967695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.775 [2024-07-24 02:11:51.967713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.775 [2024-07-24 02:11:51.967731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.775 [2024-07-24 02:11:51.967749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.967774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.967809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.967845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 
[2024-07-24 02:11:51.967877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.967910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.967941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.967973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.967990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.968971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.968986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:37.776 [2024-07-24 02:11:51.969227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969562] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.969975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.969992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.970007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.970023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.776 [2024-07-24 02:11:51.970038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.776 [2024-07-24 02:11:51.970054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:37.777 [2024-07-24 02:11:51.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49648 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:37.777 [2024-07-24 02:11:51.970908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.970974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.970991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:37.777 [2024-07-24 02:11:51.971797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10ec240 is same with the state(5) to be set 00:33:37.777 [2024-07-24 02:11:51.971831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:37.777 [2024-07-24 02:11:51.971845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:37.777 [2024-07-24 02:11:51.971858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49944 len:8 PRP1 0x0 PRP2 0x0 00:33:37.777 [2024-07-24 02:11:51.971872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:37.777 [2024-07-24 02:11:51.971936] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10ec240 was disconnected and freed. reset controller. 
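The burst of NOTICE lines above is the qpair teardown path printing each queued READ as it is aborted: every command completes with generic status 0x08, which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08)" (SCT 0x00 / SC 0x08, "Command Aborted due to SQ Deletion" in the NVMe spec), and the disconnected, freed qpair then triggers the controller reset attempts that follow. Below is a minimal, compile-only sketch of how an I/O completion callback could recognize that status; the callback name and messages are illustrative, while the types and constants (struct spdk_nvme_cpl, spdk_nvme_cpl_is_error, SPDK_NVME_SCT_GENERIC, SPDK_NVME_SC_ABORTED_SQ_DELETION) come from SPDK's public headers.

/* Illustrative completion callback, assuming SPDK's spdk_nvme_cmd_cb signature. */
#include <stdio.h>
#include "spdk/nvme.h"

static void io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Matches "ABORTED - SQ DELETION (00/08)": the submission queue was
		 * deleted while this READ was still queued, so the command never
		 * reached the media and can be resubmitted once the controller
		 * reconnects. */
		printf("I/O aborted by SQ deletion; eligible for retry\n");
		return;
	}

	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("I/O failed: sct=0x%x sc=0x%02x\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

In the bdev_nvme path logged here the retry is handled by the driver itself; the sketch only illustrates how the (00/08) pair maps onto the public status constants.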
00:33:37.777 [2024-07-24 02:11:51.975788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.777 [2024-07-24 02:11:51.975876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.777 [2024-07-24 02:11:51.976543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.777 [2024-07-24 02:11:51.976575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.777 [2024-07-24 02:11:51.976593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.777 [2024-07-24 02:11:51.976856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.777 [2024-07-24 02:11:51.977108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.777 [2024-07-24 02:11:51.977133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.777 [2024-07-24 02:11:51.977151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.777 [2024-07-24 02:11:51.980784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.777 [2024-07-24 02:11:51.990093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.777 [2024-07-24 02:11:51.990520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.777 [2024-07-24 02:11:51.990553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.777 [2024-07-24 02:11:51.990571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.777 [2024-07-24 02:11:51.990810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.777 [2024-07-24 02:11:51.991054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.777 [2024-07-24 02:11:51.991079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.777 [2024-07-24 02:11:51.991095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.777 [2024-07-24 02:11:51.994685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.777 [2024-07-24 02:11:52.003984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.777 [2024-07-24 02:11:52.004405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.777 [2024-07-24 02:11:52.004438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.777 [2024-07-24 02:11:52.004462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.777 [2024-07-24 02:11:52.004701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.777 [2024-07-24 02:11:52.004945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.777 [2024-07-24 02:11:52.004969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.777 [2024-07-24 02:11:52.004985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.777 [2024-07-24 02:11:52.008583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.777 [2024-07-24 02:11:52.017887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.777 [2024-07-24 02:11:52.018299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.777 [2024-07-24 02:11:52.018337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.777 [2024-07-24 02:11:52.018357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.018596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.018840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.018864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.018880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.022472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.031771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.032196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.032228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.032248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.032499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.032744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.032768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.032785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.036373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.778 [2024-07-24 02:11:52.045672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.046064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.046105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.046123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.046378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.046623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.046647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.046663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.050254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.059559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.059948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.059990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.060008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.060253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.060506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.060531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.060547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.064124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.778 [2024-07-24 02:11:52.073431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.073857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.073889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.073918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.074159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.074414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.074440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.074456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.078036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.087337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.087759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.087791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.087814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.088052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.088296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.088332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.088352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.091938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.778 [2024-07-24 02:11:52.101241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.101671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.101703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.101723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.101962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.102206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.102230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.102246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.105837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.115137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.115552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.115583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.115607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.115846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.116090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.116121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.116137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.119726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.778 [2024-07-24 02:11:52.129027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.129430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.129462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.129480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.129722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.129967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.129991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.130006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.133597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.142894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.143290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.143333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.143353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.143592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.143837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.143861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.143878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.147466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.778 [2024-07-24 02:11:52.156804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.157218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.157250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.157268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.157516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.157762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.157786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.157802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.161389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.170690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.171111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.171142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.171164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.171413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.171669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.171694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.171710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.175292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.778 [2024-07-24 02:11:52.184602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.185035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.185067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.185085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.185335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.778 [2024-07-24 02:11:52.185580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.778 [2024-07-24 02:11:52.185604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.778 [2024-07-24 02:11:52.185621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.778 [2024-07-24 02:11:52.189200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.778 [2024-07-24 02:11:52.198519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.778 [2024-07-24 02:11:52.198914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.778 [2024-07-24 02:11:52.198951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.778 [2024-07-24 02:11:52.198969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.778 [2024-07-24 02:11:52.199213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.199469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.199494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.199510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.203086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.212397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.212792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.212830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.212848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.213104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.213358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.213384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.213400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.216981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.226286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.226704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.226739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.226757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.226996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.227239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.227263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.227280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.230870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.240174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.240578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.240616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.240634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.240878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.241123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.241147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.241163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.244749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.254060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.254470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.254501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.254519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.254758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.255003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.255027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.255048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.258641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.267939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.268346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.268379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.268408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.268647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.268891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.268915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.268932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.272522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.281820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.282244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.282275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.282295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.282543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.282789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.282813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.282829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.286448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.295790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.296249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.296281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.296299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.296548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.296793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.296817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.296834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.300424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.309729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.310252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.310305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.310332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.310575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.310820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.310844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.310860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.314444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.323747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.324159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.324191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.324216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.324466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.324713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.324737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.324753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.328350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.337676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.338086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.338119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.338137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.338390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.338634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.338660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.338676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.342260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.351590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.352063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.352116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.352134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.352383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.352633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.352659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.352675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.356261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.365573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.365984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.366017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.366036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.366276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.366534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.366561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.366577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.370159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.379475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.379898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.379931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.379949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.380190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.380449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.380475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.380492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.384072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.393384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.393781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.393814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.393832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.394071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.394330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.394356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.394371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.397958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.407263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.407683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.407716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.407734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.407975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.408220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.408245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.408262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.411858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.779 [2024-07-24 02:11:52.421160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.421575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.421607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.421626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.421865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.422108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.422133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.779 [2024-07-24 02:11:52.422149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.779 [2024-07-24 02:11:52.425743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.779 [2024-07-24 02:11:52.435050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.779 [2024-07-24 02:11:52.435463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.779 [2024-07-24 02:11:52.435496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.779 [2024-07-24 02:11:52.435514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.779 [2024-07-24 02:11:52.435753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.779 [2024-07-24 02:11:52.435997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.779 [2024-07-24 02:11:52.436022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.436038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.439631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.448936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.449358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.449391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.449414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.449655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.449899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.449924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.449940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.453550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.462851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.463260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.463291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.463309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.463561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.463805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.463831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.463848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.467442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.476769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.477188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.477219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.477238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.477487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.477732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.477757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.477773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.481369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.490694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.491102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.491134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.491152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.491402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.491652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.491677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.491693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.495290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.504628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.505118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.505150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.505171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.505431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.505676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.505701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.505717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.509303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.518647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.519073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.519104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.519123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.519373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.519619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.519643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.519659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.523249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.532584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.533001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.533032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.533049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.533289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.533545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.533583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.533600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.537184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.546565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.546977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.547009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.547028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.547267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.547524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.547549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.547566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.551160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.560522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.560933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.560966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.560984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.561223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.561480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.561506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.561522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.565105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.574425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.574833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.574865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.574883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.575123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.575382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.575407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.575423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.579008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.588333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.588717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.588749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.588773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.589014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.589259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.589284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.589301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.592893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.602215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.602592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.602624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.602642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.602881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.603126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.603150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.603167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.606762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.616084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.616455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.616488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.616506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.616745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.616988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.617013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.617029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.620632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.629965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.630375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.630408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.630426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.630665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.630910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.630941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.630957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.634553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:37.780 [2024-07-24 02:11:52.643892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.644283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.644325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.644347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.644588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.644841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.780 [2024-07-24 02:11:52.644867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.780 [2024-07-24 02:11:52.644883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.780 [2024-07-24 02:11:52.648474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:37.780 [2024-07-24 02:11:52.657815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:37.780 [2024-07-24 02:11:52.658222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.780 [2024-07-24 02:11:52.658254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:37.780 [2024-07-24 02:11:52.658272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:37.780 [2024-07-24 02:11:52.658519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:37.780 [2024-07-24 02:11:52.658765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:37.781 [2024-07-24 02:11:52.658789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:37.781 [2024-07-24 02:11:52.658805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:37.781 [2024-07-24 02:11:52.662513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.039 [2024-07-24 02:11:52.671820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.039 [2024-07-24 02:11:52.672241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.039 [2024-07-24 02:11:52.672275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.039 [2024-07-24 02:11:52.672295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.039 [2024-07-24 02:11:52.672546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.039 [2024-07-24 02:11:52.672792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.039 [2024-07-24 02:11:52.672816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.039 [2024-07-24 02:11:52.672832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.039 [2024-07-24 02:11:52.676426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.039 [2024-07-24 02:11:52.685881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.039 [2024-07-24 02:11:52.686327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.039 [2024-07-24 02:11:52.686361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.039 [2024-07-24 02:11:52.686380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.039 [2024-07-24 02:11:52.686620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.039 [2024-07-24 02:11:52.686866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.039 [2024-07-24 02:11:52.686890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.039 [2024-07-24 02:11:52.686907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.039 [2024-07-24 02:11:52.690500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.039 [2024-07-24 02:11:52.699816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.039 [2024-07-24 02:11:52.700242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.039 [2024-07-24 02:11:52.700275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.039 [2024-07-24 02:11:52.700293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.039 [2024-07-24 02:11:52.700545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.039 [2024-07-24 02:11:52.700789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.039 [2024-07-24 02:11:52.700815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.039 [2024-07-24 02:11:52.700830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.039 [2024-07-24 02:11:52.704427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.039 [2024-07-24 02:11:52.713771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.039 [2024-07-24 02:11:52.714185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.039 [2024-07-24 02:11:52.714217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.039 [2024-07-24 02:11:52.714236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.039 [2024-07-24 02:11:52.714490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.039 [2024-07-24 02:11:52.714735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.039 [2024-07-24 02:11:52.714760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.039 [2024-07-24 02:11:52.714777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.039 [2024-07-24 02:11:52.718397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.039 [2024-07-24 02:11:52.727723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.039 [2024-07-24 02:11:52.728206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.039 [2024-07-24 02:11:52.728256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.039 [2024-07-24 02:11:52.728275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.039 [2024-07-24 02:11:52.728531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.039 [2024-07-24 02:11:52.728776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.039 [2024-07-24 02:11:52.728801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.039 [2024-07-24 02:11:52.728817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.732428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.040 [2024-07-24 02:11:52.741758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.742168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.742201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.742219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.742468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.742714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.742740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.742756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.746351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.040 [2024-07-24 02:11:52.755682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.756104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.756137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.756155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.756409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.756654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.756680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.756696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.760278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.040 [2024-07-24 02:11:52.769597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.769979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.770013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.770032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.770272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.770529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.770555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.770587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.774170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.040 [2024-07-24 02:11:52.783488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.783909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.783941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.783960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.784199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.784457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.784483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.784499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.788084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.040 [2024-07-24 02:11:52.797403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.797836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.797869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.797887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.798128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.798386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.798413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.798430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.802011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.040 [2024-07-24 02:11:52.811334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.811752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.811786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.811805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.812045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.812291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.812329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.812349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.815933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.040 [2024-07-24 02:11:52.825241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.825670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.825707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.825726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.825966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.826209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.826235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.826251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.829848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.040 [2024-07-24 02:11:52.839153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.839570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.839603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.839621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.839860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.840104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.840129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.840145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.843756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.040 [2024-07-24 02:11:52.853076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.853494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.853527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.853545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.853785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.854028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.040 [2024-07-24 02:11:52.854054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.040 [2024-07-24 02:11:52.854069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.040 [2024-07-24 02:11:52.857664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.040 [2024-07-24 02:11:52.866970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.040 [2024-07-24 02:11:52.867392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.040 [2024-07-24 02:11:52.867424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.040 [2024-07-24 02:11:52.867442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.040 [2024-07-24 02:11:52.867682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.040 [2024-07-24 02:11:52.867932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.041 [2024-07-24 02:11:52.867958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.041 [2024-07-24 02:11:52.867974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.041 [2024-07-24 02:11:52.871570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.041 [2024-07-24 02:11:52.880905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.041 [2024-07-24 02:11:52.881332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.041 [2024-07-24 02:11:52.881364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.041 [2024-07-24 02:11:52.881382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.041 [2024-07-24 02:11:52.881622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.041 [2024-07-24 02:11:52.881865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.041 [2024-07-24 02:11:52.881891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.041 [2024-07-24 02:11:52.881907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.041 [2024-07-24 02:11:52.885499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.041 [2024-07-24 02:11:52.894809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.041 [2024-07-24 02:11:52.895207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.041 [2024-07-24 02:11:52.895239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.041 [2024-07-24 02:11:52.895258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.041 [2024-07-24 02:11:52.895507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.041 [2024-07-24 02:11:52.895752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.041 [2024-07-24 02:11:52.895778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.041 [2024-07-24 02:11:52.895794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.041 [2024-07-24 02:11:52.899385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.041 [2024-07-24 02:11:52.908706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.041 [2024-07-24 02:11:52.909112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.041 [2024-07-24 02:11:52.909144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.041 [2024-07-24 02:11:52.909162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.041 [2024-07-24 02:11:52.909412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.041 [2024-07-24 02:11:52.909657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.041 [2024-07-24 02:11:52.909683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.041 [2024-07-24 02:11:52.909700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.041 [2024-07-24 02:11:52.913289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.041 [2024-07-24 02:11:52.922608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.041 [2024-07-24 02:11:52.923026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.041 [2024-07-24 02:11:52.923058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.041 [2024-07-24 02:11:52.923077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.041 [2024-07-24 02:11:52.923403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.041 [2024-07-24 02:11:52.923652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.041 [2024-07-24 02:11:52.923678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.041 [2024-07-24 02:11:52.923695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.041 [2024-07-24 02:11:52.927278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.300 [2024-07-24 02:11:52.936734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.300 [2024-07-24 02:11:52.937218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.300 [2024-07-24 02:11:52.937280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.300 [2024-07-24 02:11:52.937313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.300 [2024-07-24 02:11:52.937574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.300 [2024-07-24 02:11:52.937818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.300 [2024-07-24 02:11:52.937842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.300 [2024-07-24 02:11:52.937859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.300 [2024-07-24 02:11:52.941559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.300 [2024-07-24 02:11:52.950666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.300 [2024-07-24 02:11:52.951090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.300 [2024-07-24 02:11:52.951124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.300 [2024-07-24 02:11:52.951143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.300 [2024-07-24 02:11:52.951398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.300 [2024-07-24 02:11:52.951645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.300 [2024-07-24 02:11:52.951671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.300 [2024-07-24 02:11:52.951687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.300 [2024-07-24 02:11:52.955284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.300 [2024-07-24 02:11:52.964592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.300 [2024-07-24 02:11:52.965005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.300 [2024-07-24 02:11:52.965039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.300 [2024-07-24 02:11:52.965064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.300 [2024-07-24 02:11:52.965305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.300 [2024-07-24 02:11:52.965565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.300 [2024-07-24 02:11:52.965591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:52.965607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:52.969190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.301 [2024-07-24 02:11:52.978518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:52.978940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:52.978973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:52.978991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:52.979231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:52.979486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:52.979513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:52.979529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:52.983113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.301 [2024-07-24 02:11:52.992566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:52.992987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:52.993019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:52.993037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:52.993277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:52.993533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:52.993559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:52.993575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:52.997157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.301 [2024-07-24 02:11:53.006465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.006887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.006919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.006938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.007177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.007437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.007469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:53.007486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:53.011073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.301 [2024-07-24 02:11:53.020380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.020765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.020797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.020815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.021054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.021298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.021336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:53.021356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:53.024939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.301 [2024-07-24 02:11:53.034240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.034656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.034689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.034707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.034947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.035191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.035216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:53.035232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:53.038825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.301 [2024-07-24 02:11:53.048127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.048552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.048585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.048603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.048842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.049086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.049111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:53.049127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:53.052720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.301 [2024-07-24 02:11:53.062041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.062441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.062474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.062492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.062732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.062976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.063001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:53.063017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:53.066611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.301 [2024-07-24 02:11:53.075920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.076323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.076356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.076374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.076615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.076860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.076886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.301 [2024-07-24 02:11:53.076902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.301 [2024-07-24 02:11:53.080490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.301 [2024-07-24 02:11:53.089789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.301 [2024-07-24 02:11:53.090207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.301 [2024-07-24 02:11:53.090239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.301 [2024-07-24 02:11:53.090258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.301 [2024-07-24 02:11:53.090508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.301 [2024-07-24 02:11:53.090752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.301 [2024-07-24 02:11:53.090777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.090793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.094382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.302 [2024-07-24 02:11:53.103682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.104090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.104121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.104144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.104398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.104643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.104669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.104686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.108272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.302 [2024-07-24 02:11:53.117603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.117990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.118022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.118040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.118280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.118535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.118561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.118578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.122153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.302 [2024-07-24 02:11:53.131458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.131865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.131897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.131915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.132154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.132411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.132438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.132454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.136038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.302 [2024-07-24 02:11:53.145345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.145752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.145784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.145802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.146042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.146285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.146310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.146346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.149935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.302 [2024-07-24 02:11:53.159252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.159655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.159689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.159707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.159948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.160193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.160219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.160236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.163835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.302 [2024-07-24 02:11:53.173139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.173556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.173589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.173607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.173848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.174092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.174117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.174133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.177729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.302 [2024-07-24 02:11:53.187049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.302 [2024-07-24 02:11:53.187477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.302 [2024-07-24 02:11:53.187509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.302 [2024-07-24 02:11:53.187527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.302 [2024-07-24 02:11:53.187767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.302 [2024-07-24 02:11:53.188011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.302 [2024-07-24 02:11:53.188037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.302 [2024-07-24 02:11:53.188053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.302 [2024-07-24 02:11:53.191744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.201232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.201667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.201701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.201721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.201962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.202208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.202234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.202251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.205854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.215164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.215599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.215633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.215652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.215892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.216135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.216160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.216176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.219772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.229073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.229501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.229534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.229552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.229792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.230036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.230061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.230077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.233669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.242983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.243394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.243427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.243446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.243692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.243935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.243961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.243978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.247569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.256878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.257286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.257326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.257347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.257588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.257831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.257856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.257873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.261462] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.270759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.271239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.271271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.271289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.271540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.271784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.271810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.271826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.275414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.284718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.285134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.285166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.285184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.285437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.285681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.285706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.285728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.289310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.298617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.299036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.299069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.299086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.299337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.299582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.299608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.299624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.303207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.312528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.312942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.312973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.312992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.313233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.313490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.313517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.313533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.317115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.326440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.326856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.326887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.326905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.327144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.327401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.327428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.327445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.331032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.340358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.340889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.340953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.340972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.341212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.341467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.341492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.341508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.345103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.354206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.354630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.354664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.354682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.354923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.355168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.355193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.355210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.358799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.368098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.368496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.368528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.368547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.368786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.369030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.369055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.369072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.372670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.381972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.382395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.382427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.382446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.382685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.382935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.382961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.382977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.386565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.586 [2024-07-24 02:11:53.395869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.396265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.396297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.396325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.586 [2024-07-24 02:11:53.396569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.586 [2024-07-24 02:11:53.396812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.586 [2024-07-24 02:11:53.396837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.586 [2024-07-24 02:11:53.396853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.586 [2024-07-24 02:11:53.400438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.586 [2024-07-24 02:11:53.409741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.586 [2024-07-24 02:11:53.410138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.586 [2024-07-24 02:11:53.410171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.586 [2024-07-24 02:11:53.410189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.587 [2024-07-24 02:11:53.410440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.587 [2024-07-24 02:11:53.410685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.587 [2024-07-24 02:11:53.410710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.587 [2024-07-24 02:11:53.410726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.587 [2024-07-24 02:11:53.414305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.587 [2024-07-24 02:11:53.423611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.587 [2024-07-24 02:11:53.424035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.587 [2024-07-24 02:11:53.424067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.587 [2024-07-24 02:11:53.424085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.587 [2024-07-24 02:11:53.424334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.587 [2024-07-24 02:11:53.424579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.587 [2024-07-24 02:11:53.424605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.587 [2024-07-24 02:11:53.424621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.587 [2024-07-24 02:11:53.428208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.587 [2024-07-24 02:11:53.437514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.587 [2024-07-24 02:11:53.437926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.587 [2024-07-24 02:11:53.437958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.587 [2024-07-24 02:11:53.437976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.587 [2024-07-24 02:11:53.438215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.587 [2024-07-24 02:11:53.438469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.587 [2024-07-24 02:11:53.438495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.587 [2024-07-24 02:11:53.438512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.587 [2024-07-24 02:11:53.442094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.587 [2024-07-24 02:11:53.451402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.587 [2024-07-24 02:11:53.451803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.587 [2024-07-24 02:11:53.451835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.587 [2024-07-24 02:11:53.451853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.587 [2024-07-24 02:11:53.452092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.587 [2024-07-24 02:11:53.452346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.587 [2024-07-24 02:11:53.452372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.587 [2024-07-24 02:11:53.452390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.587 [2024-07-24 02:11:53.456085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.587 [2024-07-24 02:11:53.465528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.587 [2024-07-24 02:11:53.465957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.587 [2024-07-24 02:11:53.465992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.587 [2024-07-24 02:11:53.466011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.587 [2024-07-24 02:11:53.466251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.587 [2024-07-24 02:11:53.466507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.587 [2024-07-24 02:11:53.466533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.587 [2024-07-24 02:11:53.466550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.587 [2024-07-24 02:11:53.470136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.856 [2024-07-24 02:11:53.479404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.856 [2024-07-24 02:11:53.479810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.856 [2024-07-24 02:11:53.479844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.856 [2024-07-24 02:11:53.479869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.856 [2024-07-24 02:11:53.480110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.856 [2024-07-24 02:11:53.480365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.856 [2024-07-24 02:11:53.480391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.856 [2024-07-24 02:11:53.480408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.856 [2024-07-24 02:11:53.484034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.856 [2024-07-24 02:11:53.493450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.856 [2024-07-24 02:11:53.493866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.493900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.493919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.494158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.494413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.494439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.494456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.498043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.857 [2024-07-24 02:11:53.507361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.507783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.507815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.507834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.508073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.508326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.508352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.508369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.511953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.857 [2024-07-24 02:11:53.521252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.521671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.521703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.521722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.521961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.522204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.522235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.522253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.525844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.857 [2024-07-24 02:11:53.535160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.535575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.535608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.535626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.535866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.536110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.536134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.536150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.539740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.857 [2024-07-24 02:11:53.549039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.549443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.549477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.549496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.549737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.549982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.550008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.550025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.553617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.857 [2024-07-24 02:11:53.562930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.563323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.563356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.563375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.563615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.563859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.563884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.563901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.567489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.857 [2024-07-24 02:11:53.576802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.577187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.577220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.577239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.577489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.577734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.577759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.577776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.581366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.857 [2024-07-24 02:11:53.590665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.591074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.591106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.591124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.591375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.591619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.591645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.591662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.595242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.857 [2024-07-24 02:11:53.604549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.604944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.604976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.604995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.605234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.605490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.605517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.605533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.857 [2024-07-24 02:11:53.609119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.857 [2024-07-24 02:11:53.618428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.857 [2024-07-24 02:11:53.618824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.857 [2024-07-24 02:11:53.618855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.857 [2024-07-24 02:11:53.618874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.857 [2024-07-24 02:11:53.619123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.857 [2024-07-24 02:11:53.619379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.857 [2024-07-24 02:11:53.619405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.857 [2024-07-24 02:11:53.619422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.623006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.858 [2024-07-24 02:11:53.632335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.632732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.632764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.632783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.633023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.633268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.633293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.633309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.636908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.858 [2024-07-24 02:11:53.646224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.646646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.646679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.646698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.646938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.647184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.647210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.647227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.650818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.858 [2024-07-24 02:11:53.660166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.660581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.660613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.660631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.660871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.661116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.661141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.661163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.664757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.858 [2024-07-24 02:11:53.674078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.674472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.674504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.674522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.674762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.675007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.675032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.675052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.678646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.858 [2024-07-24 02:11:53.687964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.688376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.688408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.688427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.688666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.688911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.688935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.688951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.692585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.858 [2024-07-24 02:11:53.701895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.702324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.702357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.702375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.702615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.702860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.702885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.702901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.706490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.858 [2024-07-24 02:11:53.715807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.716224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.716255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.716273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.716521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.716767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.716791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.716807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.720393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:38.858 [2024-07-24 02:11:53.729706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.730124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.730156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.730174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.730422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.730668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.730692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.858 [2024-07-24 02:11:53.730708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.858 [2024-07-24 02:11:53.734289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:38.858 [2024-07-24 02:11:53.743619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:38.858 [2024-07-24 02:11:53.744026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:38.858 [2024-07-24 02:11:53.744057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:38.858 [2024-07-24 02:11:53.744076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:38.858 [2024-07-24 02:11:53.744323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:38.858 [2024-07-24 02:11:53.744569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:38.858 [2024-07-24 02:11:53.744594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:38.859 [2024-07-24 02:11:53.744611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:38.859 [2024-07-24 02:11:53.748390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.117 [2024-07-24 02:11:53.757666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.117 [2024-07-24 02:11:53.758060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.117 [2024-07-24 02:11:53.758094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.117 [2024-07-24 02:11:53.758112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.117 [2024-07-24 02:11:53.758370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.117 [2024-07-24 02:11:53.758617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.117 [2024-07-24 02:11:53.758641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.117 [2024-07-24 02:11:53.758658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.117 [2024-07-24 02:11:53.762246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.117 [2024-07-24 02:11:53.771579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.117 [2024-07-24 02:11:53.771996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.117 [2024-07-24 02:11:53.772028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.117 [2024-07-24 02:11:53.772047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.117 [2024-07-24 02:11:53.772287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.117 [2024-07-24 02:11:53.772552] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.117 [2024-07-24 02:11:53.772577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.117 [2024-07-24 02:11:53.772594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.117 [2024-07-24 02:11:53.776184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.117 [2024-07-24 02:11:53.785505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.117 [2024-07-24 02:11:53.785912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.117 [2024-07-24 02:11:53.785945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.117 [2024-07-24 02:11:53.785963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.117 [2024-07-24 02:11:53.786202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.786456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.786482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.786498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.790080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.118 [2024-07-24 02:11:53.799398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.799819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.799851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.799869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.800109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.800363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.800388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.800410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.803992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.118 [2024-07-24 02:11:53.813336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.813772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.813805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.813824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.814063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.814309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.814355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.814371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.817952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.118 [2024-07-24 02:11:53.827255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.827684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.827716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.827735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.827974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.828219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.828244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.828260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.831863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.118 [2024-07-24 02:11:53.841176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.841605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.841637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.841656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.841895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.842140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.842165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.842181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.845771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.118 [2024-07-24 02:11:53.855076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.855500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.855537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.855556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.855795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.856040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.856064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.856081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.859680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.118 [2024-07-24 02:11:53.868977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.869378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.869410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.869429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.869668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.869913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.869938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.869954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.873550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.118 [2024-07-24 02:11:53.882863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.883260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.883291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.883310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.883588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.883833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.883859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.883875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.887487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.118 [2024-07-24 02:11:53.896792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.897206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.897239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.897257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.897513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.897765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.897790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.897806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.901391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.118 [2024-07-24 02:11:53.910707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.118 [2024-07-24 02:11:53.911097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.118 [2024-07-24 02:11:53.911130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.118 [2024-07-24 02:11:53.911149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.118 [2024-07-24 02:11:53.911400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.118 [2024-07-24 02:11:53.911646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.118 [2024-07-24 02:11:53.911672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.118 [2024-07-24 02:11:53.911688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.118 [2024-07-24 02:11:53.915269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.118 [2024-07-24 02:11:53.924574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:53.924985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:53.925017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:53.925035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:53.925274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:53.925527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:53.925554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:53.925570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.119 [2024-07-24 02:11:53.929153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.119 [2024-07-24 02:11:53.938462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:53.938861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:53.938893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:53.938911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:53.939150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:53.939406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:53.939446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:53.939464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.119 [2024-07-24 02:11:53.943055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.119 [2024-07-24 02:11:53.952363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:53.952771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:53.952803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:53.952821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:53.953061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:53.953305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:53.953340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:53.953358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.119 [2024-07-24 02:11:53.956953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.119 [2024-07-24 02:11:53.966256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:53.966653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:53.966686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:53.966704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:53.966945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:53.967190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:53.967216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:53.967232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.119 [2024-07-24 02:11:53.970821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.119 [2024-07-24 02:11:53.980125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:53.980554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:53.980587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:53.980606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:53.980846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:53.981092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:53.981117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:53.981134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.119 [2024-07-24 02:11:53.984724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.119 [2024-07-24 02:11:53.994028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:53.994424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:53.994456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:53.994480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:53.994720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:53.994964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:53.994989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:53.995005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.119 [2024-07-24 02:11:53.998594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.119 [2024-07-24 02:11:54.008235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.119 [2024-07-24 02:11:54.008648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.119 [2024-07-24 02:11:54.008684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.119 [2024-07-24 02:11:54.008703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.119 [2024-07-24 02:11:54.008944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.119 [2024-07-24 02:11:54.009216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.119 [2024-07-24 02:11:54.009254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.119 [2024-07-24 02:11:54.009283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.378 [2024-07-24 02:11:54.013085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.378 [2024-07-24 02:11:54.022112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.378 [2024-07-24 02:11:54.022526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.378 [2024-07-24 02:11:54.022561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.378 [2024-07-24 02:11:54.022580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.378 [2024-07-24 02:11:54.022820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.378 [2024-07-24 02:11:54.023064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.378 [2024-07-24 02:11:54.023089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.378 [2024-07-24 02:11:54.023106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.378 [2024-07-24 02:11:54.026697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.378 [2024-07-24 02:11:54.036000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.378 [2024-07-24 02:11:54.036415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.378 [2024-07-24 02:11:54.036448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.378 [2024-07-24 02:11:54.036467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.378 [2024-07-24 02:11:54.036707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.378 [2024-07-24 02:11:54.036952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.378 [2024-07-24 02:11:54.036983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.378 [2024-07-24 02:11:54.037000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.378 [2024-07-24 02:11:54.040593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.378 [2024-07-24 02:11:54.049896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.378 [2024-07-24 02:11:54.050328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.378 [2024-07-24 02:11:54.050360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.378 [2024-07-24 02:11:54.050378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.378 [2024-07-24 02:11:54.050618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.378 [2024-07-24 02:11:54.050862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.378 [2024-07-24 02:11:54.050888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.378 [2024-07-24 02:11:54.050904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.378 [2024-07-24 02:11:54.054493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.378 [2024-07-24 02:11:54.063811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.378 [2024-07-24 02:11:54.064220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.378 [2024-07-24 02:11:54.064252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.378 [2024-07-24 02:11:54.064270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.378 [2024-07-24 02:11:54.064520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.378 [2024-07-24 02:11:54.064765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.064791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.064807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.068394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.379 [2024-07-24 02:11:54.077691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.078077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.078109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.078127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.078378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.078622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.078647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.078664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.082244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.379 [2024-07-24 02:11:54.091554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.091965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.091997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.092015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.092255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.092510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.092536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.092553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.096131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.379 [2024-07-24 02:11:54.105444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.105854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.105887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.105905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.106146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.106401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.106428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.106445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.110028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.379 [2024-07-24 02:11:54.119334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.119755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.119786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.119804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.120044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.120287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.120312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.120340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.123923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.379 [2024-07-24 02:11:54.133224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.133616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.133648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.133666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.133911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.134155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.134180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.134196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.137787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.379 [2024-07-24 02:11:54.147081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.147509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.147543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.147562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.147802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.148046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.148071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.148087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.151679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.379 [2024-07-24 02:11:54.160994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.161381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.161415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.161433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.161674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.161918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.161943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.161959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.165550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.379 [2024-07-24 02:11:54.174850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.175269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.175301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.175328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.175571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.175815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.175839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.175863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.179451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.379 [2024-07-24 02:11:54.188752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.189134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.189167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.189186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.379 [2024-07-24 02:11:54.189437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.379 [2024-07-24 02:11:54.189684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.379 [2024-07-24 02:11:54.189709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.379 [2024-07-24 02:11:54.189726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.379 [2024-07-24 02:11:54.193304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.379 [2024-07-24 02:11:54.202607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.379 [2024-07-24 02:11:54.203018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.379 [2024-07-24 02:11:54.203050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.379 [2024-07-24 02:11:54.203069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.380 [2024-07-24 02:11:54.203309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.380 [2024-07-24 02:11:54.203581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.380 [2024-07-24 02:11:54.203607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.380 [2024-07-24 02:11:54.203623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.380 [2024-07-24 02:11:54.207203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.380 [2024-07-24 02:11:54.216510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.380 [2024-07-24 02:11:54.216919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.380 [2024-07-24 02:11:54.216951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.380 [2024-07-24 02:11:54.216970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.380 [2024-07-24 02:11:54.217209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.380 [2024-07-24 02:11:54.217464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.380 [2024-07-24 02:11:54.217491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.380 [2024-07-24 02:11:54.217507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.380 [2024-07-24 02:11:54.221088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.380 [2024-07-24 02:11:54.230393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.380 [2024-07-24 02:11:54.230812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.380 [2024-07-24 02:11:54.230844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.380 [2024-07-24 02:11:54.230862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.380 [2024-07-24 02:11:54.231101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.380 [2024-07-24 02:11:54.231356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.380 [2024-07-24 02:11:54.231383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.380 [2024-07-24 02:11:54.231400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.380 [2024-07-24 02:11:54.234979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.380 [2024-07-24 02:11:54.244278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.380 [2024-07-24 02:11:54.244683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.380 [2024-07-24 02:11:54.244715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.380 [2024-07-24 02:11:54.244734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.380 [2024-07-24 02:11:54.244973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.380 [2024-07-24 02:11:54.245217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.380 [2024-07-24 02:11:54.245242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.380 [2024-07-24 02:11:54.245259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.380 [2024-07-24 02:11:54.248848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.380 [2024-07-24 02:11:54.258162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.380 [2024-07-24 02:11:54.258591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.380 [2024-07-24 02:11:54.258624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.380 [2024-07-24 02:11:54.258642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.380 [2024-07-24 02:11:54.258882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.380 [2024-07-24 02:11:54.259125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.380 [2024-07-24 02:11:54.259150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.380 [2024-07-24 02:11:54.259166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.380 [2024-07-24 02:11:54.262756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.639 [2024-07-24 02:11:54.272306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.639 [2024-07-24 02:11:54.272803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-24 02:11:54.272839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.639 [2024-07-24 02:11:54.272858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.639 [2024-07-24 02:11:54.273099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.639 [2024-07-24 02:11:54.273374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.639 [2024-07-24 02:11:54.273402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.639 [2024-07-24 02:11:54.273420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.639 [2024-07-24 02:11:54.277109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.639 [2024-07-24 02:11:54.286246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.639 [2024-07-24 02:11:54.286671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-24 02:11:54.286706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.639 [2024-07-24 02:11:54.286725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.639 [2024-07-24 02:11:54.286964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.639 [2024-07-24 02:11:54.287208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.639 [2024-07-24 02:11:54.287234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.639 [2024-07-24 02:11:54.287250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.639 [2024-07-24 02:11:54.290846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.639 [2024-07-24 02:11:54.300150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.639 [2024-07-24 02:11:54.300550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-24 02:11:54.300583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.639 [2024-07-24 02:11:54.300601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.639 [2024-07-24 02:11:54.300841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.639 [2024-07-24 02:11:54.301085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.639 [2024-07-24 02:11:54.301110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.639 [2024-07-24 02:11:54.301126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.639 [2024-07-24 02:11:54.304718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.639 [2024-07-24 02:11:54.314023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.639 [2024-07-24 02:11:54.314440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-24 02:11:54.314474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.639 [2024-07-24 02:11:54.314493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.639 [2024-07-24 02:11:54.314734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.639 [2024-07-24 02:11:54.314980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.639 [2024-07-24 02:11:54.315005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.639 [2024-07-24 02:11:54.315022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.639 [2024-07-24 02:11:54.318628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.639 [2024-07-24 02:11:54.327924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.639 [2024-07-24 02:11:54.328310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.639 [2024-07-24 02:11:54.328349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.639 [2024-07-24 02:11:54.328369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.639 [2024-07-24 02:11:54.328610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.639 [2024-07-24 02:11:54.328855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.639 [2024-07-24 02:11:54.328880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.639 [2024-07-24 02:11:54.328897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.639 [2024-07-24 02:11:54.332528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.639 [2024-07-24 02:11:54.341834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.639 [2024-07-24 02:11:54.342244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.342276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.342295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.342544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.342789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.342814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.342830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.346445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.640 [2024-07-24 02:11:54.355745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.640 [2024-07-24 02:11:54.356168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.356200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.356219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.356469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.356713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.356739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.356756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.360361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.640 [2024-07-24 02:11:54.369658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.640 [2024-07-24 02:11:54.370045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.370084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.370103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.370354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.370600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.370626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.370643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.374242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.640 [2024-07-24 02:11:54.383554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.640 [2024-07-24 02:11:54.383968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.384002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.384020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.384259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.384514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.384540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.384557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.388139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.640 [2024-07-24 02:11:54.397447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.640 [2024-07-24 02:11:54.397848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.397880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.397898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.398137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.398392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.398418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.398435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.402014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.640 [2024-07-24 02:11:54.411325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.640 [2024-07-24 02:11:54.411745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.411777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.411796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.412035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.412286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.412312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.412341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.415924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.640 [2024-07-24 02:11:54.425218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.640 [2024-07-24 02:11:54.425635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.640 [2024-07-24 02:11:54.425668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.640 [2024-07-24 02:11:54.425686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.640 [2024-07-24 02:11:54.425926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.640 [2024-07-24 02:11:54.426172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.640 [2024-07-24 02:11:54.426197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.640 [2024-07-24 02:11:54.426214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.640 [2024-07-24 02:11:54.429801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.640 [2024-07-24 02:11:54.439096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.439490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.439522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.439540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.641 [2024-07-24 02:11:54.439779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.641 [2024-07-24 02:11:54.440023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.641 [2024-07-24 02:11:54.440049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.641 [2024-07-24 02:11:54.440065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.641 [2024-07-24 02:11:54.443660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.641 [2024-07-24 02:11:54.452963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.453382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.453414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.453433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.641 [2024-07-24 02:11:54.453673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.641 [2024-07-24 02:11:54.453917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.641 [2024-07-24 02:11:54.453942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.641 [2024-07-24 02:11:54.453958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.641 [2024-07-24 02:11:54.457548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.641 [2024-07-24 02:11:54.466870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.467293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.467333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.467354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.641 [2024-07-24 02:11:54.467594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.641 [2024-07-24 02:11:54.467838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.641 [2024-07-24 02:11:54.467864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.641 [2024-07-24 02:11:54.467880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.641 [2024-07-24 02:11:54.471468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.641 [2024-07-24 02:11:54.480766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.481159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.481191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.481209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.641 [2024-07-24 02:11:54.481458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.641 [2024-07-24 02:11:54.481702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.641 [2024-07-24 02:11:54.481728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.641 [2024-07-24 02:11:54.481744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.641 [2024-07-24 02:11:54.485331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.641 [2024-07-24 02:11:54.494634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.495031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.495063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.495080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.641 [2024-07-24 02:11:54.495328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.641 [2024-07-24 02:11:54.495573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.641 [2024-07-24 02:11:54.495599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.641 [2024-07-24 02:11:54.495615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.641 [2024-07-24 02:11:54.499195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.641 [2024-07-24 02:11:54.508508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.508916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.508948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.508973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.641 [2024-07-24 02:11:54.509214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.641 [2024-07-24 02:11:54.509469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.641 [2024-07-24 02:11:54.509495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.641 [2024-07-24 02:11:54.509511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.641 [2024-07-24 02:11:54.513096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.641 [2024-07-24 02:11:54.522405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.641 [2024-07-24 02:11:54.522817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.641 [2024-07-24 02:11:54.522850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.641 [2024-07-24 02:11:54.522869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.642 [2024-07-24 02:11:54.523109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.642 [2024-07-24 02:11:54.523365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.642 [2024-07-24 02:11:54.523392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.642 [2024-07-24 02:11:54.523409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.642 [2024-07-24 02:11:54.526986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.901 [2024-07-24 02:11:54.536428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.536821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.536856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.536875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.537115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.537374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.537402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.537418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.541141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.901 [2024-07-24 02:11:54.550462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.550862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.550895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.550914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.551154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.551410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.551442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.551460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.555043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.901 [2024-07-24 02:11:54.564360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.564798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.564831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.564850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.565089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.565344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.565370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.565386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.568965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.901 [2024-07-24 02:11:54.578268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.578688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.578721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.578739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.578978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.579222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.579246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.579263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.582860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.901 [2024-07-24 02:11:54.592157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.592555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.592589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.592608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.592848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.593092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.593118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.593134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.596724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.901 [2024-07-24 02:11:54.606019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.606448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.606480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.606498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.606738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.606981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.607007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.607024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.610618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.901 [2024-07-24 02:11:54.619914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.620333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.620365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.620384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.620623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.620868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.620894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.620910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.624498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.901 [2024-07-24 02:11:54.633790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.634175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.634207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.901 [2024-07-24 02:11:54.634226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.901 [2024-07-24 02:11:54.634476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.901 [2024-07-24 02:11:54.634722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.901 [2024-07-24 02:11:54.634748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.901 [2024-07-24 02:11:54.634764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.901 [2024-07-24 02:11:54.638351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.901 [2024-07-24 02:11:54.647649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.901 [2024-07-24 02:11:54.648068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.901 [2024-07-24 02:11:54.648100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.648119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.648375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.648620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.648645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.648661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.652239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.902 [2024-07-24 02:11:54.661556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.661973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.662005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.662023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.662262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.662515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.662542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.662559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.666137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.902 [2024-07-24 02:11:54.675441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.675854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.675886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.675904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.676144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.676398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.676425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.676442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.680021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.902 [2024-07-24 02:11:54.689321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.689729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.689761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.689779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.690018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.690261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.690287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.690309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.693900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.902 [2024-07-24 02:11:54.703192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.703596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.703628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.703646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.703885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.704129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.704155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.704171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.707762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.902 [2024-07-24 02:11:54.717058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.717492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.717526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.717544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.717785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.718031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.718056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.718073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.721667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.902 [2024-07-24 02:11:54.730968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.731375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.731408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.731427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.731668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.731913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.731939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.731956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.735545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.902 [2024-07-24 02:11:54.744853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.745265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.745303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.745332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.745574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.745817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.745843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.745859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.749454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.902 [2024-07-24 02:11:54.758763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.759165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.759198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.759217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.759470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.759725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.759751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.759767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.902 [2024-07-24 02:11:54.763367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:39.902 [2024-07-24 02:11:54.772694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.902 [2024-07-24 02:11:54.773103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.902 [2024-07-24 02:11:54.773135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.902 [2024-07-24 02:11:54.773153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.902 [2024-07-24 02:11:54.773403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.902 [2024-07-24 02:11:54.773648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.902 [2024-07-24 02:11:54.773673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.902 [2024-07-24 02:11:54.773689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.903 [2024-07-24 02:11:54.777270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:39.903 [2024-07-24 02:11:54.786624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:39.903 [2024-07-24 02:11:54.787119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:39.903 [2024-07-24 02:11:54.787168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:39.903 [2024-07-24 02:11:54.787186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:39.903 [2024-07-24 02:11:54.787439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:39.903 [2024-07-24 02:11:54.787690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:39.903 [2024-07-24 02:11:54.787716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:39.903 [2024-07-24 02:11:54.787732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:39.903 [2024-07-24 02:11:54.791420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.162 [2024-07-24 02:11:54.800745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.801280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-07-24 02:11:54.801343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.162 [2024-07-24 02:11:54.801363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.162 [2024-07-24 02:11:54.801603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.162 [2024-07-24 02:11:54.801848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.162 [2024-07-24 02:11:54.801873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.162 [2024-07-24 02:11:54.801889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.162 [2024-07-24 02:11:54.805508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.162 [2024-07-24 02:11:54.814628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.815020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-07-24 02:11:54.815054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.162 [2024-07-24 02:11:54.815073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.162 [2024-07-24 02:11:54.815313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.162 [2024-07-24 02:11:54.815571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.162 [2024-07-24 02:11:54.815596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.162 [2024-07-24 02:11:54.815612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.162 [2024-07-24 02:11:54.819201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.162 [2024-07-24 02:11:54.828540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.828929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-07-24 02:11:54.828961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.162 [2024-07-24 02:11:54.828980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.162 [2024-07-24 02:11:54.829219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.162 [2024-07-24 02:11:54.829478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.162 [2024-07-24 02:11:54.829504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.162 [2024-07-24 02:11:54.829520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.162 [2024-07-24 02:11:54.833117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.162 [2024-07-24 02:11:54.842456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.842865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-07-24 02:11:54.842897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.162 [2024-07-24 02:11:54.842916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.162 [2024-07-24 02:11:54.843155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.162 [2024-07-24 02:11:54.843416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.162 [2024-07-24 02:11:54.843442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.162 [2024-07-24 02:11:54.843458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.162 [2024-07-24 02:11:54.847044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.162 [2024-07-24 02:11:54.856379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.856791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-07-24 02:11:54.856823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.162 [2024-07-24 02:11:54.856841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.162 [2024-07-24 02:11:54.857080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.162 [2024-07-24 02:11:54.857339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.162 [2024-07-24 02:11:54.857364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.162 [2024-07-24 02:11:54.857380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.162 [2024-07-24 02:11:54.860989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.162 [2024-07-24 02:11:54.870343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.870774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.162 [2024-07-24 02:11:54.870806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.162 [2024-07-24 02:11:54.870824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.162 [2024-07-24 02:11:54.871064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.162 [2024-07-24 02:11:54.871309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.162 [2024-07-24 02:11:54.871344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.162 [2024-07-24 02:11:54.871363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.162 [2024-07-24 02:11:54.874948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.162 [2024-07-24 02:11:54.884269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.162 [2024-07-24 02:11:54.884684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.884716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.884739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.884980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.885224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.885249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.885265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.888856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.163 [2024-07-24 02:11:54.898171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.898543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.898576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.898595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.898835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.899081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.899105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.899121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.902722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.163 [2024-07-24 02:11:54.912129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.912529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.912562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.912581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.912821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.913066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.913091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.913107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.916700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.163 [2024-07-24 02:11:54.926022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.926394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.926426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.926445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.926685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.926930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.926960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.926978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.930572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.163 [2024-07-24 02:11:54.939893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.940306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.940346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.940365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.940606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.940851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.940875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.940891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.944486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.163 [2024-07-24 02:11:54.953817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.954200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.954231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.954249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.954500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.954746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.954770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.954786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.958378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1578961 Killed "${NVMF_APP[@]}" "$@" 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1580033 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1580033 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1580033 ']' 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:40.163 02:11:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.163 [2024-07-24 02:11:54.967727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.968224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.968256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.968275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.968524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.968771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.968795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.968812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.972407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.163 [2024-07-24 02:11:54.981719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.982135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.982167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.982185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.163 [2024-07-24 02:11:54.982435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.163 [2024-07-24 02:11:54.982679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.163 [2024-07-24 02:11:54.982704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.163 [2024-07-24 02:11:54.982720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.163 [2024-07-24 02:11:54.986306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
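[editorial note] The "line 35: 1578961 Killed" message above is bash reporting that the previous nvmf target process was killed by the test; bdevperf.sh then re-enters tgt_init, starts a fresh nvmf_tgt with -m 0xE, and calls waitforlisten so the script blocks until the new process is listening on /var/tmp/spdk.sock (the xtrace shows the real helper from autotest_common.sh, with max_retries=100). A simplified stand-in for that wait step, under those assumptions and not the actual helper, could look like:

    # Simplified sketch of the waitforlisten step: poll until the given pid
    # has created the RPC UNIX socket, bail out if the process dies first,
    # and give up after the same retry budget the log shows (100 attempts).
    wait_for_rpc_sock() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for i in $(seq 1 100); do
            kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; return 1; }
            [ -S "$sock" ] && return 0
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }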
00:33:40.163 [2024-07-24 02:11:54.995641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.163 [2024-07-24 02:11:54.996049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.163 [2024-07-24 02:11:54.996080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.163 [2024-07-24 02:11:54.996099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.164 [2024-07-24 02:11:54.996348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.164 [2024-07-24 02:11:54.996593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.164 [2024-07-24 02:11:54.996622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.164 [2024-07-24 02:11:54.996638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.164 [2024-07-24 02:11:55.000235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.164 [2024-07-24 02:11:55.009567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.164 [2024-07-24 02:11:55.009977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-07-24 02:11:55.010009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.164 [2024-07-24 02:11:55.010028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.164 [2024-07-24 02:11:55.010267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.164 [2024-07-24 02:11:55.010521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.164 [2024-07-24 02:11:55.010547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.164 [2024-07-24 02:11:55.010563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.164 [2024-07-24 02:11:55.013618] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:40.164 [2024-07-24 02:11:55.013708] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.164 [2024-07-24 02:11:55.014142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.164 [2024-07-24 02:11:55.023658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.164 [2024-07-24 02:11:55.024049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-07-24 02:11:55.024085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.164 [2024-07-24 02:11:55.024103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.164 [2024-07-24 02:11:55.024355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.164 [2024-07-24 02:11:55.024600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.164 [2024-07-24 02:11:55.024625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.164 [2024-07-24 02:11:55.024642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.164 [2024-07-24 02:11:55.028221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.164 [2024-07-24 02:11:55.037534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.164 [2024-07-24 02:11:55.038019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-07-24 02:11:55.038051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.164 [2024-07-24 02:11:55.038077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.164 [2024-07-24 02:11:55.038326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.164 [2024-07-24 02:11:55.038572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.164 [2024-07-24 02:11:55.038596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.164 [2024-07-24 02:11:55.038612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.164 [2024-07-24 02:11:55.042205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.164 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.164 [2024-07-24 02:11:55.051562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.164 [2024-07-24 02:11:55.051999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.164 [2024-07-24 02:11:55.052033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.164 [2024-07-24 02:11:55.052061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.164 [2024-07-24 02:11:55.052301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.164 [2024-07-24 02:11:55.052585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.164 [2024-07-24 02:11:55.052613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.164 [2024-07-24 02:11:55.052630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.423 [2024-07-24 02:11:55.056465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.423 [2024-07-24 02:11:55.065524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.065916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.065960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.065979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.066219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.066477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.066502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.066518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.070100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.424 [2024-07-24 02:11:55.079403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.079817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.079851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.079870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.080113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.080367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.080393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.080410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.083986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.424 [2024-07-24 02:11:55.084310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:40.424 [2024-07-24 02:11:55.093302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.093889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.093940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.093972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.094222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.094479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.094505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.094523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.098119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
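[editorial note] Interleaved with the reconnect errors, the restarted target reports its startup: it was launched with core mask 0xE (see the nvmf_tgt command line and DPDK EAL parameters above), so it sees three usable cores and later starts reactors on cores 1, 2 and 3. As an illustration only, this is how such a hex mask maps to CPU indices:

    # Illustrative only: expand the hex core mask passed via -m/-c into the
    # CPU indices it selects. 0xE is binary 1110, i.e. cores 1, 2 and 3,
    # matching "Total cores available: 3" and the reactor notices below.
    mask=$(( 0xE ))
    cores=()
    for cpu in {0..63}; do
        (( (mask >> cpu) & 1 )) && cores+=("$cpu")
    done
    echo "core mask 0xE selects cores: ${cores[*]}"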
00:33:40.424 [2024-07-24 02:11:55.107268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.107762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.107808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.107830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.108077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.108332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.108358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.108377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.111953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.424 [2024-07-24 02:11:55.121254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.121714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.121747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.121766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.122005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.122250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.122276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.122293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.125888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.424 [2024-07-24 02:11:55.135197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.135706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.135754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.135775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.136022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.136280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.136305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.136331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.139919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.424 [2024-07-24 02:11:55.149240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.149780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.149830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.149852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.150099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.150358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.150383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.150402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.153984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.424 [2024-07-24 02:11:55.163311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.163717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.163750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.163770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.164017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.164263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.164288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.164305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.424 [2024-07-24 02:11:55.167892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.424 [2024-07-24 02:11:55.177195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.424 [2024-07-24 02:11:55.177526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.424 [2024-07-24 02:11:55.177565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.424 [2024-07-24 02:11:55.177582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.424 [2024-07-24 02:11:55.177596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.424 [2024-07-24 02:11:55.177607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:40.424 [2024-07-24 02:11:55.177616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.424 [2024-07-24 02:11:55.177649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.424 [2024-07-24 02:11:55.177668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.424 [2024-07-24 02:11:55.177689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:40.424 [2024-07-24 02:11:55.177745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:40.424 [2024-07-24 02:11:55.177748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:40.424 [2024-07-24 02:11:55.177918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.424 [2024-07-24 02:11:55.178161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.424 [2024-07-24 02:11:55.178186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.424 [2024-07-24 02:11:55.178203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:40.424 [2024-07-24 02:11:55.181799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.425 [2024-07-24 02:11:55.191126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.191718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.191774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.191796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.192046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.192293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.192328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.192349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.195951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.425 [2024-07-24 02:11:55.205066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.205660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.205709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.205731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.205983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.206231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.206256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.206276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.209879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
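[editorial note] The app_setup_trace notices above say the target was started with tracepoint group mask 0xFFFF and point at two ways to collect the trace data. The commands below are taken directly from those notices; the location of the spdk_trace binary is an assumption (it is expected on PATH or under the build output in this workspace):

    spdk_trace -s nvmf -i 0            # live snapshot of the running target's trace events
    cp /dev/shm/nvmf_trace.0 /tmp/     # keep the shm trace file for offline analysis/debug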
00:33:40.425 [2024-07-24 02:11:55.218996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.219579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.219627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.219650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.219902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.220150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.220189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.220208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.223804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.425 [2024-07-24 02:11:55.232921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.233516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.233562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.233585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.233837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.234085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.234111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.234130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.237722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.425 [2024-07-24 02:11:55.246826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.247359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.247403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.247425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.247675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.247922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.247948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.247967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.251563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.425 [2024-07-24 02:11:55.260892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.261519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.261569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.261591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.261865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.262115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.262141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.262160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.265752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.425 [2024-07-24 02:11:55.274860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.275270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.275305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.275333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.275575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.275820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.275846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.275863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.279452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.425 [2024-07-24 02:11:55.288458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.288845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.288874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.288891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.289121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.289361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.289385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.289400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.292703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.425 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:40.425 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:40.425 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:40.425 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:40.425 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.425 [2024-07-24 02:11:55.302130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.302550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.302580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.425 [2024-07-24 02:11:55.302596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.425 [2024-07-24 02:11:55.302827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.425 [2024-07-24 02:11:55.303051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.425 [2024-07-24 02:11:55.303072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.425 [2024-07-24 02:11:55.303085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.425 [2024-07-24 02:11:55.306352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.425 [2024-07-24 02:11:55.315855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.425 [2024-07-24 02:11:55.316253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.425 [2024-07-24 02:11:55.316284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.426 [2024-07-24 02:11:55.316302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.426 [2024-07-24 02:11:55.316525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.426 [2024-07-24 02:11:55.316769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.426 [2024-07-24 02:11:55.316790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.426 [2024-07-24 02:11:55.316804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.684 [2024-07-24 02:11:55.320184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.684 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.684 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:40.684 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.684 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.684 [2024-07-24 02:11:55.329120] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.684 [2024-07-24 02:11:55.329484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.684 [2024-07-24 02:11:55.329887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.684 [2024-07-24 02:11:55.329918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.684 [2024-07-24 02:11:55.329936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.684 [2024-07-24 02:11:55.330167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.684 [2024-07-24 02:11:55.330418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.684 [2024-07-24 02:11:55.330441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.684 [2024-07-24 02:11:55.330455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.684 [2024-07-24 02:11:55.333628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.684 [2024-07-24 02:11:55.342996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.684 [2024-07-24 02:11:55.343382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.684 [2024-07-24 02:11:55.343412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.684 [2024-07-24 02:11:55.343429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.684 [2024-07-24 02:11:55.343659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.684 [2024-07-24 02:11:55.343875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.684 [2024-07-24 02:11:55.343896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.685 [2024-07-24 02:11:55.343910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.685 [2024-07-24 02:11:55.347071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.685 [2024-07-24 02:11:55.356533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.685 [2024-07-24 02:11:55.356986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.685 [2024-07-24 02:11:55.357017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.685 [2024-07-24 02:11:55.357035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.685 [2024-07-24 02:11:55.357269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.685 [2024-07-24 02:11:55.357537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.685 [2024-07-24 02:11:55.357561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.685 [2024-07-24 02:11:55.357576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.685 [2024-07-24 02:11:55.360876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.685 [2024-07-24 02:11:55.370217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.685 [2024-07-24 02:11:55.370837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.685 [2024-07-24 02:11:55.370883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.685 [2024-07-24 02:11:55.370904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.685 [2024-07-24 02:11:55.371157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.685 [2024-07-24 02:11:55.371413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.685 [2024-07-24 02:11:55.371438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.685 [2024-07-24 02:11:55.371455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.685 Malloc0 00:33:40.685 [2024-07-24 02:11:55.374729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.685 [2024-07-24 02:11:55.383854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.685 [2024-07-24 02:11:55.384232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.685 [2024-07-24 02:11:55.384270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10f1f70 with addr=10.0.0.2, port=4420 00:33:40.685 [2024-07-24 02:11:55.384287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f70 is same with the state(5) to be set 00:33:40.685 [2024-07-24 02:11:55.384514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10f1f70 (9): Bad file descriptor 00:33:40.685 [2024-07-24 02:11:55.384757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:40.685 [2024-07-24 02:11:55.384780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:40.685 [2024-07-24 02:11:55.384794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:40.685 [2024-07-24 02:11:55.388010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:40.685 [2024-07-24 02:11:55.394240] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.685 [2024-07-24 02:11:55.397500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.685 02:11:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1579249 00:33:40.685 [2024-07-24 02:11:55.562891] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
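The target side of this bdevperf run is stood up by the five RPCs traced above (host/bdevperf.sh@17-21); the last one, adding the 10.0.0.2:4420 listener, is what finally lets the reconnect loop succeed. A minimal sketch of the same sequence driven by hand through scripts/rpc.py — assuming the target app is already running and answering on the default /var/tmp/spdk.sock RPC socket — would be:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, as in bdevperf.sh@17
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                        # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # subsystem with serial
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420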
00:33:50.654 00:33:50.654 Latency(us) 00:33:50.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.654 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:50.654 Verification LBA range: start 0x0 length 0x4000 00:33:50.654 Nvme1n1 : 15.01 6587.29 25.73 8806.62 0.00 8290.43 564.34 23204.60 00:33:50.654 =================================================================================================================== 00:33:50.654 Total : 6587.29 25.73 8806.62 0.00 8290.43 564.34 23204.60 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:50.654 rmmod nvme_tcp 00:33:50.654 rmmod nvme_fabrics 00:33:50.654 rmmod nvme_keyring 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1580033 ']' 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1580033 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1580033 ']' 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1580033 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1580033 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1580033' 00:33:50.654 killing process with pid 1580033 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1580033 00:33:50.654 
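As a cross-check on the summary table above, the throughput column follows directly from the IOPS column and the 4096-byte I/O size:
    6587.29 IOPS x 4096 B ~= 26.98 MB/s ~= 25.73 MiB/s over the 15.01 s run, matching the reported MiB/s.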
02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1580033 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:50.654 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:50.655 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:50.655 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:50.655 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:50.655 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.655 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.655 02:12:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.555 02:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:52.555 00:33:52.555 real 0m22.431s 00:33:52.555 user 1m0.051s 00:33:52.555 sys 0m4.177s 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:52.556 ************************************ 00:33:52.556 END TEST nvmf_bdevperf 00:33:52.556 ************************************ 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:52.556 ************************************ 00:33:52.556 START TEST nvmf_target_disconnect 00:33:52.556 ************************************ 00:33:52.556 02:12:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:52.556 * Looking for test storage... 
00:33:52.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.556 
02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:52.556 02:12:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.457 
02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:54.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:54.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:54.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:54.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:54.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:33:54.457 00:33:54.457 --- 10.0.0.2 ping statistics --- 00:33:54.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.457 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:54.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:33:54.457 00:33:54.457 --- 10.0.0.1 ping statistics --- 00:33:54.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.457 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:33:54.457 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.458 02:12:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:54.458 ************************************ 00:33:54.458 START TEST nvmf_target_disconnect_tc1 00:33:54.458 ************************************ 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:54.458 02:12:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:54.458 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.458 [2024-07-24 02:12:09.090288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:54.458 [2024-07-24 02:12:09.090391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd5590 with addr=10.0.0.2, port=4420 00:33:54.458 [2024-07-24 02:12:09.090426] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:54.458 [2024-07-24 02:12:09.090446] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:54.458 [2024-07-24 02:12:09.090459] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:54.458 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:54.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:54.458 Initializing NVMe Controllers 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:54.458 00:33:54.458 real 0m0.089s 00:33:54.458 user 0m0.040s 00:33:54.458 sys 0m0.049s 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:54.458 ************************************ 00:33:54.458 END TEST nvmf_target_disconnect_tc1 00:33:54.458 ************************************ 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:54.458 02:12:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:54.458 ************************************ 00:33:54.458 START TEST nvmf_target_disconnect_tc2 00:33:54.458 ************************************ 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1583689 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1583689 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1583689 ']' 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:54.458 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.458 [2024-07-24 02:12:09.184443] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:33:54.458 [2024-07-24 02:12:09.184525] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.458 EAL: No free 2048 kB hugepages reported on node 1 00:33:54.458 [2024-07-24 02:12:09.248075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.458 [2024-07-24 02:12:09.336154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
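The -m 0xF0 core mask passed to nvmf_tgt above selects bits 4-7, which is why the four reactors below come up on cores 4, 5, 6 and 7. A throwaway check of which bits the mask sets (illustrative only, not part of the test):
    python3 -c "print([bit for bit in range(8) if 0xF0 >> bit & 1])"   # -> [4, 5, 6, 7]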
00:33:54.458 [2024-07-24 02:12:09.336208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.458 [2024-07-24 02:12:09.336232] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.458 [2024-07-24 02:12:09.336243] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.458 [2024-07-24 02:12:09.336253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.458 [2024-07-24 02:12:09.336341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:54.458 [2024-07-24 02:12:09.336464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:54.458 [2024-07-24 02:12:09.336530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:54.458 [2024-07-24 02:12:09.336528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.716 Malloc0 00:33:54.716 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.717 [2024-07-24 02:12:09.499518] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.717 [2024-07-24 02:12:09.527795] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1583721 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:54.717 02:12:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:54.717 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.305 02:12:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1583689 00:33:57.305 02:12:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting 
I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 [2024-07-24 02:12:11.552589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 
00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Write completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 [2024-07-24 02:12:11.552941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.305 starting I/O failed 00:33:57.305 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read 
completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 [2024-07-24 02:12:11.553247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with 
error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Read completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 Write completed with error (sct=0, sc=8) 00:33:57.306 starting I/O failed 00:33:57.306 [2024-07-24 02:12:11.553548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:57.306 [2024-07-24 02:12:11.553789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.553822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.553943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.553970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.554136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.554162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.554267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.554294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.554402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.554428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.554543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.554569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.554794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.554820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.554983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 
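Note: the four "CQ transport error -6 (No such device or address)" messages (qpair ids 1-4) mark the point where polling a qpair's completion queue fails at the transport level rather than returning individual aborted completions. A minimal polling sketch, assuming SPDK's public API; the wrapper function and logging are illustrative, not the test code:

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative only: poll one qpair and report a transport-level failure.
 * A negative return value (e.g. -6, ENXIO) corresponds to the
 * "CQ transport error" lines above. */
static void poll_one(struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
    if (rc < 0) {
        fprintf(stderr, "CQ transport error %d on qpair\n", rc);
        /* outstanding I/O on this qpair is then completed with an abort status */
    }
}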
00:33:57.306 [2024-07-24 02:12:11.555145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.555279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.555440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.555613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.555795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.555971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.555997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.556138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.556164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.556273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.556299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.556423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.556449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 00:33:57.306 [2024-07-24 02:12:11.556545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.306 [2024-07-24 02:12:11.556570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.306 qpair failed and we were unable to recover it. 
00:33:57.307 [2024-07-24 02:12:11.556743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.556768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.556901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.556927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.557065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.557093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.557268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.557294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.557407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.557433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.557542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.557568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.557701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.557731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.557865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.557891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.558039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.558215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 
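Note: each "connect() failed, errno = 111" line is Linux ECONNREFUSED: the TCP transport keeps retrying 10.0.0.2:4420 while no listener is accepting on that port. A self-contained sketch that reproduces the same errno against a reachable host with no listener (the address and port mirror the log; everything else is illustrative):

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return 1;
    }

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no NVMe-oF TCP listener on the target, this reports errno 111. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}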
00:33:57.307 [2024-07-24 02:12:11.558356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.558486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.558620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.558799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.558954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.558997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.559128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.559154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.559262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.559288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.559417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.559445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.559543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.559569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.559729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.559755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 
00:33:57.307 [2024-07-24 02:12:11.559976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.560039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.560181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.560210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.560329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.560373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.560489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.560515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.560633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.560659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.560836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.560865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.561076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.561136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.561249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.561275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.561399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.561426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.561537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.561563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 
00:33:57.307 [2024-07-24 02:12:11.561706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.561732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.561862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.561888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.562022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.562049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.562189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.562229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.562404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.562444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.562562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.307 [2024-07-24 02:12:11.562589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.307 qpair failed and we were unable to recover it. 00:33:57.307 [2024-07-24 02:12:11.562730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.562756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.562902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.562928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.563060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.563085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.563244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.563269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 
00:33:57.308 [2024-07-24 02:12:11.563380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.563406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.563543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.563572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.563699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.563725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.563915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.563941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.564049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.564076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.564251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.564291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.564472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.564501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.564621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.564648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.564759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.564784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.564916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.564942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 
00:33:57.308 [2024-07-24 02:12:11.565069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.565095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.565199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.565225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.565349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.565390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.565503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.565532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.565742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.565782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.565919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.565947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.566060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.566244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.566391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.566557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 
00:33:57.308 [2024-07-24 02:12:11.566700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.566817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.566953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.566979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.567102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.567141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.567267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.567325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.567549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.567577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.567757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.567800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.567932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.567982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.568115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.568142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.568276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.568314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 
00:33:57.308 [2024-07-24 02:12:11.568430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.568456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.568591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.568627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.308 [2024-07-24 02:12:11.568891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.308 [2024-07-24 02:12:11.568942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.308 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.569223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.569281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.569406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.569433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.569564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.569590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.569773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.569803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.569963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.569989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.570190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.570244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.570380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.570408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 
00:33:57.309 [2024-07-24 02:12:11.570544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.570570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.570742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.570768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.570928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.570954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.571086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.571112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.571235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.571264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.571414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.571442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.571580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.571607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.571797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.571836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.572001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.572185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 
00:33:57.309 [2024-07-24 02:12:11.572339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.572503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.572662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.572794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.572945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.572971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.573127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.573169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.573337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.573363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.573521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.573547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.573660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.573686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.573814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.573840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 
00:33:57.309 [2024-07-24 02:12:11.574000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.574034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.574172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.574200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.574349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.574394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.574527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.574554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.574729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.574758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.574920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.574962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.575192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.575221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.575362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.575406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.575512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.309 [2024-07-24 02:12:11.575538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.309 qpair failed and we were unable to recover it. 00:33:57.309 [2024-07-24 02:12:11.575697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.575723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 
00:33:57.310 [2024-07-24 02:12:11.575855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.575882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.576014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.576042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.576201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.576227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.576384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.576424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.576571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.576599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.576701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.576729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.576884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.576915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.577057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.577087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.577252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.577291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.577449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.577488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 
00:33:57.310 [2024-07-24 02:12:11.577601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.577628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.577770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.577811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.577944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.577970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.578080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.578105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.578207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.578236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.578374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.578402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.578532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.578558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.578700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.578727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.579069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.579122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.579266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.579295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 
00:33:57.310 [2024-07-24 02:12:11.579478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.579504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.579629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.579655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.579782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.310 [2024-07-24 02:12:11.579808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.310 qpair failed and we were unable to recover it. 00:33:57.310 [2024-07-24 02:12:11.579941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.579967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.580122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.580149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.580303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.580340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.580523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.580550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.580680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.580706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.580811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.580837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.580993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.581022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 
00:33:57.311 [2024-07-24 02:12:11.581173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.581204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.581340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.581367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.581489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.581528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.581671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.581698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.581853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.581879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.582015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.582042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.582194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.582222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.582344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.582390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.582489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.582515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.582660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.582686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 
00:33:57.311 [2024-07-24 02:12:11.582806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.582835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.582972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.583001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.583152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.583180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.583377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.583405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.583564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.583607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.583812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.583840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.584063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.584089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.584198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.584223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.584357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.584383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.584548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.584574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 
00:33:57.311 [2024-07-24 02:12:11.584745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.584770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.584897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.584922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.585962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.585988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 00:33:57.311 [2024-07-24 02:12:11.586120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.311 [2024-07-24 02:12:11.586146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.311 qpair failed and we were unable to recover it. 
00:33:57.311 [2024-07-24 02:12:11.586249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.586274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.586416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.586442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.586587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.586626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.586764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.586792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.586921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.586947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.587062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.587090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.587227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.587255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.587416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.587443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.587581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.587608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.587716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.587743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 
00:33:57.312 [2024-07-24 02:12:11.587854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.587881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.588936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.588962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.589092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.589117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.589229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.589255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 
00:33:57.312 [2024-07-24 02:12:11.589374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.589400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.589553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.589579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.589709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.589735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.589884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.589946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.590128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.590154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.590293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.590324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.590458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.590483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.590624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.590650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.590802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.590828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.590979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.591004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 
00:33:57.312 [2024-07-24 02:12:11.591108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.591134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.591268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.591293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.591468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.591494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.591598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.591623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.591768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.591810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.591994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.592020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.592175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.312 [2024-07-24 02:12:11.592201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.312 qpair failed and we were unable to recover it. 00:33:57.312 [2024-07-24 02:12:11.592309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.592351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.592527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.592567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.592705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.592733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 
00:33:57.313 [2024-07-24 02:12:11.592867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.592894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.593026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.593052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.593156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.593182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.593354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.593409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.593606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.593633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.593741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.593769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.593898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.593924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.594049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.594079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.594257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.594283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.594421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.594449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 
00:33:57.313 [2024-07-24 02:12:11.594582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.594608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.594709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.594740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.594909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.594935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.595095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.595251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.595392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.595547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.595733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.595887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.595991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 
00:33:57.313 [2024-07-24 02:12:11.596151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.596311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.596470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.596637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.596797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.596954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.596980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.597087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.597112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.597223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.597250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.597393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.597434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.597574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.597603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 
00:33:57.313 [2024-07-24 02:12:11.597740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.597767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.597927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.597953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.598085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.598111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.313 qpair failed and we were unable to recover it. 00:33:57.313 [2024-07-24 02:12:11.598244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.313 [2024-07-24 02:12:11.598272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.598413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.598452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.598588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.598616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.598781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.598807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.598962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.598987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.599182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.599213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.599393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.599434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 
00:33:57.314 [2024-07-24 02:12:11.599573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.599600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.599765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.599791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.599921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.599947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.600138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.600167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.600349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.600375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.600503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.600529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.600712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.600738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.600832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.600858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.601015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.601041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.601196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.601222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 
00:33:57.314 [2024-07-24 02:12:11.601353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.601379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.601485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.601512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.601701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.601730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.601863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.601888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.601996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.602023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.602154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.602181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.602314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.602348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.602476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.602502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.602681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.602722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.602878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.602907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 
00:33:57.314 [2024-07-24 02:12:11.603043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.603071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.603209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.603235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.603376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.603404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.603537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.603563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.603695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.603722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.603864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.603890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.604037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.604077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.604271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.604298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.604435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.604462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 00:33:57.314 [2024-07-24 02:12:11.604571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.314 [2024-07-24 02:12:11.604597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.314 qpair failed and we were unable to recover it. 
00:33:57.314 [2024-07-24 02:12:11.604721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.604810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.604967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.604993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.605098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.605124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.605254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.605280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.605424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.605450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.605608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.605634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.605758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.605784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.605894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.605920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.606022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.606048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 00:33:57.315 [2024-07-24 02:12:11.606167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.315 [2024-07-24 02:12:11.606207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.315 qpair failed and we were unable to recover it. 
00:33:57.315 [2024-07-24 02:12:11.606376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.606405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.606538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.606564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.606662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.606688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.606795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.606822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.606928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.606955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.607088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.607114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.607223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.607250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.607381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.607408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.607534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.607560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.607707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.607734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 
00:33:57.316 [2024-07-24 02:12:11.607864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.607891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.608061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.608089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.608327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.608370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.608504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.608530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.608638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.608665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.608820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.608846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.608949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.608975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.609090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.609237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.609372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 
00:33:57.316 [2024-07-24 02:12:11.609531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.609662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.609794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.609948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.609993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.610151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.610177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.610297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.610343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.610491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.610520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.610625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.610652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.610787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.610814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.610987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.611014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 
00:33:57.316 [2024-07-24 02:12:11.611122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.611148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.611302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.611351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.611508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.611537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.611672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.611700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.611843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.611869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.611993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.316 [2024-07-24 02:12:11.612020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.316 qpair failed and we were unable to recover it. 00:33:57.316 [2024-07-24 02:12:11.612154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.612180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.612361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.612421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.612556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.612583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.612723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.612750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 
00:33:57.317 [2024-07-24 02:12:11.612884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.612909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.613052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.613077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.613201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.613227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.613414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.613455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.613577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.613633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.613786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.613814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.613951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.613977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.614187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.614237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.614421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.614448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.614578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.614625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 
00:33:57.317 [2024-07-24 02:12:11.614906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.614957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.615106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.615133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.615304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.615350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.615478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.615504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.615633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.615658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.615793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.615840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.615997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.616051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.616174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.616199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.616357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.616386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.616541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.616567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 
00:33:57.317 [2024-07-24 02:12:11.616706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.616732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.616868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.616895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.617057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.617086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.617263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.617289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.617426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.617454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.617564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.617590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.617703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.617728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.617900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.617942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.618101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.618127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.618289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.618326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 
00:33:57.317 [2024-07-24 02:12:11.618427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.618454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.618611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.618636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.618767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.317 [2024-07-24 02:12:11.618792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.317 qpair failed and we were unable to recover it. 00:33:57.317 [2024-07-24 02:12:11.618926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.618953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.619106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.619132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.619256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.619282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.619420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.619448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.619576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.619629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.619776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.619801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.619930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.619962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 
00:33:57.318 [2024-07-24 02:12:11.620095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.620121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.620250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.620276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.620380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.620406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.620503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.620529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.620661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.620687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.620823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.620849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.620976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.621001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.621146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.621172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.621350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.621394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.621527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.621553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 
00:33:57.318 [2024-07-24 02:12:11.621688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.621713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.621824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.621850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.621980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.622007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.622237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.622277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.622433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.622462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.622632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.622659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.622814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.622841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.622970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.622998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.623211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.623263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.623374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.623401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 
00:33:57.318 [2024-07-24 02:12:11.623538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.623564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.623663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.623688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.623815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.623843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.624016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.624085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.624228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.624257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.624432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.624458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.624586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.624630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.624812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.624871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.625047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.625075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 00:33:57.318 [2024-07-24 02:12:11.625235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.318 [2024-07-24 02:12:11.625278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.318 qpair failed and we were unable to recover it. 
00:33:57.318 [2024-07-24 02:12:11.625458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.625499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.625669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.625698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.625832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.625858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.625956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.625982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.626190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.626253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.626359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.626386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.626494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.626519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.626670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.626699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.626875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.626903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.627061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.627086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 
00:33:57.319 [2024-07-24 02:12:11.627248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.627280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.627443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.627471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.627609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.627635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.627752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.627778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.627898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.627929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.628101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.628130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.628276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.628305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.628447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.628486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.628654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.628681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.628950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.629001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 
00:33:57.319 [2024-07-24 02:12:11.629164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.629216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.629339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.629366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.629548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.629594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.629807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.629866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.630057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.630104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.630236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.630262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.630429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.630456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.630593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.630636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.630766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.630797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.631021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.631071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 
00:33:57.319 [2024-07-24 02:12:11.631222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.631247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.631375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.631401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.631506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.631532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.631656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.631687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.631921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.631974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.632205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.632257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.319 qpair failed and we were unable to recover it. 00:33:57.319 [2024-07-24 02:12:11.632391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.319 [2024-07-24 02:12:11.632418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.632570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.632617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.632809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.632853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.633010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.633054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 
00:33:57.320 [2024-07-24 02:12:11.633191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.633217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.633367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.633397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.633595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.633625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.633792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.633835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.633965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.633991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.634126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.634151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.634256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.634283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.634394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.634421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.634635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.634694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.634875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.634917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 
00:33:57.320 [2024-07-24 02:12:11.635054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.635081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.635207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.635233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.635390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.635434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.635593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.635636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.635826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.635855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.636031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.636057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.636166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.636193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.636372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.636402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.636540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.636584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.636766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.636811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 
00:33:57.320 [2024-07-24 02:12:11.636941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.636967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.637125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.637151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.637283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.637309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.637474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.637522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.637697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.637726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.637944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.637996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.638167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.320 [2024-07-24 02:12:11.638195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.320 qpair failed and we were unable to recover it. 00:33:57.320 [2024-07-24 02:12:11.638309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.638341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.638473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.638502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.638644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.638672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 
00:33:57.321 [2024-07-24 02:12:11.638820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.638848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.639017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.639046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.639225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.639253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.639385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.639412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.639562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.639607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.639886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.639948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.640137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.640164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.640347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.640391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.640572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.640616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.640768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.640811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 
00:33:57.321 [2024-07-24 02:12:11.640955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.640999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.641128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.641154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.641308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.641358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.641511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.641555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.641711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.641770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.641935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.641962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.642062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.642088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.642244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.642270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.642422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.642470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.642624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.642653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 
00:33:57.321 [2024-07-24 02:12:11.642800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.642829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.642963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.642989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.643120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.643146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.643275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.643302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.643456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.643482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.643627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.643662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.643807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.643836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.644003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.644032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.644188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.644214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.644347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.644375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 
00:33:57.321 [2024-07-24 02:12:11.644511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.644538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.644663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.644707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.644837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.644881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.321 qpair failed and we were unable to recover it. 00:33:57.321 [2024-07-24 02:12:11.645035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.321 [2024-07-24 02:12:11.645084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.645220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.645246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.645382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.645409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.645564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.645608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.645741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.645767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.645924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.645950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.646161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.646188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 
00:33:57.322 [2024-07-24 02:12:11.646399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.646443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.646613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.646640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.646801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.646828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.646953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.646979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.647102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.647129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.647260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.647287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.647421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.647448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.647610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.647636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.647764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.647791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.647904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.647930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 
00:33:57.322 [2024-07-24 02:12:11.648065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.648091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.648221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.648248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.648458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.648485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.648634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.648677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.648833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.648876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.649008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.649034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.649168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.649196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.649372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.649423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.649572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.649616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.649762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.649791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 
00:33:57.322 [2024-07-24 02:12:11.649969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.650176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.650337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.650487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.650610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.650739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.650881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.650907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.651039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.651066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.651170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.651196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 00:33:57.322 [2024-07-24 02:12:11.651357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.322 [2024-07-24 02:12:11.651384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.322 qpair failed and we were unable to recover it. 
00:33:57.323 [2024-07-24 02:12:11.651518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.651544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.651701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.651727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.651862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.651888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.652015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.652042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.652179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.652207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.652385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.652425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.652548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.652575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.652732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.652761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.652908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.652936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.653043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.653072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 
00:33:57.323 [2024-07-24 02:12:11.653198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.653224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.653369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.653397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.653506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.653532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.653710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.653739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.653907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.653935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.654078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.654107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.654265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.654294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.654443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.654470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.654602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.654628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.654864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.654917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 
00:33:57.323 [2024-07-24 02:12:11.655185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.655236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.655344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.655371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.655514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.655558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.655743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.655785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.655941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.655985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.656115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.656142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.656305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.656339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.656456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.656483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.656641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.656670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.656951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.657004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 
00:33:57.323 [2024-07-24 02:12:11.657173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.657207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.657413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.657452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.657592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.657619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.657771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.657801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.657921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.657948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.658125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.658153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.323 qpair failed and we were unable to recover it. 00:33:57.323 [2024-07-24 02:12:11.658327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.323 [2024-07-24 02:12:11.658371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.658530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.658555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.658711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.658736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.658887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.658915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 
00:33:57.324 [2024-07-24 02:12:11.659064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.659091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.659250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.659276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.659437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.659463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.659575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.659600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.659736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.659761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.659864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.659890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.660052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.660080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.660206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.660232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.660391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.660417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.660553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.660578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 
00:33:57.324 [2024-07-24 02:12:11.660740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.660765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.660888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.660913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.661081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.661108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.661269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.661293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.661437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.661463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.661574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.661617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.661770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.661795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.661925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.661949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.662097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.662125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.662303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.662335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 
00:33:57.324 [2024-07-24 02:12:11.662469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.662494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.662606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.662631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.662765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.662791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.662934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.662964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.663104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.663131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.663256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.663281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.663418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.663443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.663553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.663578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.663713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.663753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.663888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.663916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 
00:33:57.324 [2024-07-24 02:12:11.664028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.664055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.664219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.324 [2024-07-24 02:12:11.664259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.324 qpair failed and we were unable to recover it. 00:33:57.324 [2024-07-24 02:12:11.664378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.664406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.664565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.664592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.664722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.664773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.664967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.665011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.665140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.665166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.665308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.665366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.665550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.665579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.665775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.665819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 
00:33:57.325 [2024-07-24 02:12:11.665938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.665981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.666142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.666168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.666330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.666374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.666555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.666598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.666779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.666826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.667108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.667165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.667305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.667336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.667473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.667501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.667638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.667667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.667841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.667870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 
00:33:57.325 [2024-07-24 02:12:11.668009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.668053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.668211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.668237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.668385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.668415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.668583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.668627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.668827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.668870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.669087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.669114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.669247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.669274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.669389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.669416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.669572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.669602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.669770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.669798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 
00:33:57.325 [2024-07-24 02:12:11.669968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.669997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.670181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.670224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.325 [2024-07-24 02:12:11.670387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.325 [2024-07-24 02:12:11.670416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.325 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.670518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.670544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.670677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.670702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.670888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.670916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.671159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.671212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.671381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.671410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.671544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.671570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.671717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.671746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 
00:33:57.326 [2024-07-24 02:12:11.671956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.672012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.672150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.672184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.672360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.672387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.672515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.672541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.672692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.672721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.672935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.672995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.673174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.673203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.673305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.673345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.673496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.673522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.673627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.673654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 
00:33:57.326 [2024-07-24 02:12:11.673814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.673842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.673987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.674015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.674158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.674190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.674330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.674373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.674530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.674555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.674719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.674770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.674917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.674946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.675091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.675119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.675261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.675291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.675444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.675470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 
00:33:57.326 [2024-07-24 02:12:11.675568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.675594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.675751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.675780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.675929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.675958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.676079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.676107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.676280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.676310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.676470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.676496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.676603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.676629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.676757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.676782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.326 [2024-07-24 02:12:11.676945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.326 [2024-07-24 02:12:11.676978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.326 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.677093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.677119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 
00:33:57.327 [2024-07-24 02:12:11.677267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.677296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.677455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.677481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.677615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.677640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.677787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.677816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.678001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.678026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.678174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.678203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.678385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.678425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.678560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.678603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.678774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.678803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.678977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.679006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 
00:33:57.327 [2024-07-24 02:12:11.679153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.679182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.679360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.679387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.679496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.679523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.679657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.679683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.679815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.679841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.679971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.680135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.680294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.680452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.680623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 
00:33:57.327 [2024-07-24 02:12:11.680803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.680953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.680995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.681165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.681194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.681326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.681352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.681457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.681482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.681615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.681645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.681796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.681825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.681963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.681991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.682159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.682188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.682337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.682363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 
00:33:57.327 [2024-07-24 02:12:11.682531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.682557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.682719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.682747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.682921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.682950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.683129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.683157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.327 qpair failed and we were unable to recover it. 00:33:57.327 [2024-07-24 02:12:11.683303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.327 [2024-07-24 02:12:11.683337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.683497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.683522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.683678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.683707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.683853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.683881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.684103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.684131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.684252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.684282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 
00:33:57.328 [2024-07-24 02:12:11.684462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.684489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.684620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.684645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.684777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.684820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.684925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.684954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.685082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.685125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.685270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.685300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.685461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.685487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.685617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.685642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.685797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.685823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.685961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.685989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 
00:33:57.328 [2024-07-24 02:12:11.686127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.686156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.686265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.686293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.686442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.686472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.686631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.686657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.686807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.686837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.687006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.687155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.687342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.687476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.687609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 
00:33:57.328 [2024-07-24 02:12:11.687753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.687923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.687951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.688133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.688159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.688261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.688303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.688462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.688488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.688616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.688642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.688822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.688851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.688995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.689023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.689166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.689192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.689314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.689350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 
00:33:57.328 [2024-07-24 02:12:11.689492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.328 [2024-07-24 02:12:11.689518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.328 qpair failed and we were unable to recover it. 00:33:57.328 [2024-07-24 02:12:11.689624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.689649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.689821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.689850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.689951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.689979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.690106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.690133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.690287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.690356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.690544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.690571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.690704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.690731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.690859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.690901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.691050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.691079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 
00:33:57.329 [2024-07-24 02:12:11.691204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.691230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.691376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.691403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.691531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.691557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.691681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.691707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.691814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.691840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.692001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.692027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.692171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.692200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.692335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.692379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.692475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.692502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.692656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.692681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 
00:33:57.329 [2024-07-24 02:12:11.692875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.692935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.693106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.693134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.693277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.693303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.693410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.693440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.693612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.693640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.693761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.693786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.693920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.693946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.694100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.694128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.694310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.694356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.694490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.694517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 
00:33:57.329 [2024-07-24 02:12:11.694697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.694725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.694856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.694882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.695038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.695079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.695224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.695252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.695376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.695402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.695504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.695530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.695722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.329 [2024-07-24 02:12:11.695747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.329 qpair failed and we were unable to recover it. 00:33:57.329 [2024-07-24 02:12:11.695853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.695879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.696010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.696036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.696170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.696195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 
00:33:57.330 [2024-07-24 02:12:11.696389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.696415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.696549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.696575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.696745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.696770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.696873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.696900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.697024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.697050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.697212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.697240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.697421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.697447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.697580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.697606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.697760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.697788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 00:33:57.330 [2024-07-24 02:12:11.697934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.330 [2024-07-24 02:12:11.697959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.330 qpair failed and we were unable to recover it. 
00:33:57.330 [2024-07-24 02:12:11.698086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.330 [2024-07-24 02:12:11.698118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.330 qpair failed and we were unable to recover it.
00:33:57.330 [2024-07-24 02:12:11.698277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.330 [2024-07-24 02:12:11.698328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.330 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every retry logged between 02:12:11.698 and 02:12:11.734 ...]
00:33:57.336 [2024-07-24 02:12:11.734560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.336 [2024-07-24 02:12:11.734588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.336 qpair failed and we were unable to recover it.
00:33:57.336 [2024-07-24 02:12:11.734734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.734759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.734913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.734956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.735069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.735098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.735243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.735272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.735430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.735456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.735565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.735590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.735718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.735749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.735893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.735921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.736080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.336 [2024-07-24 02:12:11.736106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.336 qpair failed and we were unable to recover it. 00:33:57.336 [2024-07-24 02:12:11.736234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.736260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 
00:33:57.337 [2024-07-24 02:12:11.736439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.736465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.736589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.736614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.736720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.736745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.736879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.736905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.737064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.737093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.737229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.737257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.737409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.737436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.737568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.737593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.737714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.737742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.737912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.737940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 
00:33:57.337 [2024-07-24 02:12:11.738081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.738106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.738241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.738285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.738473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.738499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.738644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.738673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.738825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.738852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.738986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.739028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.739176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.739204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.739386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.739412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.739516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.739543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.739678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.739703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 
00:33:57.337 [2024-07-24 02:12:11.739864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.739893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.740034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.740062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.740228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.740256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.740375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.740401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.740515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.740540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.740690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.740722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.740880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.740905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.741037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.741079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.741228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.741256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 00:33:57.337 [2024-07-24 02:12:11.741423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.337 [2024-07-24 02:12:11.741452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.337 qpair failed and we were unable to recover it. 
00:33:57.337 [2024-07-24 02:12:11.741607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.741632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.741760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.741801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.741949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.741978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.742101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.742129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.742255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.742280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.742442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.742485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.742594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.742623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.742767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.742795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.742941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.742966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.743074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.743099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 
00:33:57.338 [2024-07-24 02:12:11.743227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.743253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.743424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.743452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.743599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.743625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.743759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.743784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.743897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.743922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.744069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.744098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.744246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.744272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.744384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.744410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.744535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.744560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.744708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.744735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 
00:33:57.338 [2024-07-24 02:12:11.744857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.744883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.745041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.745082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.745230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.745258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.745405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.745436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.745616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.745641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.745785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.745813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.745953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.745982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.746123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.746152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.746276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.746302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 00:33:57.338 [2024-07-24 02:12:11.746422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.338 [2024-07-24 02:12:11.746448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.338 qpair failed and we were unable to recover it. 
00:33:57.338 [2024-07-24 02:12:11.746606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.746631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.746803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.746832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.747008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.747034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.747157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.747199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.747368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.747398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.747516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.747545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.747725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.747755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.747907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.747978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.748081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.748109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.748253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.748282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 
00:33:57.339 [2024-07-24 02:12:11.748467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.748493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.748644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.748672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.748840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.748868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.749013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.749042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.749182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.749210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.749391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.749417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.749565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.749608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.749788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.749816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.749960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.749986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.750099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.750125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 
00:33:57.339 [2024-07-24 02:12:11.750242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.750268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.750445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.750474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.750619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.750644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.750745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.750772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.750949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.750978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.751117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.751145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.751297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.751327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.751466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.751491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.751651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.751677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.751861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.751886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 
00:33:57.339 [2024-07-24 02:12:11.752020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.752045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.752178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.752220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.752334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.752363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.339 [2024-07-24 02:12:11.752473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.339 [2024-07-24 02:12:11.752506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.339 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.752665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.752691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.752791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.752816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.752946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.752971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.753125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.753154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.753276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.753301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.753477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.753503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 
00:33:57.340 [2024-07-24 02:12:11.753656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.753684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.753828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.753856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.754045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.754070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.754213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.754241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.754411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.754440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.754609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.754637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.754785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.754810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.754940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.754984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.755158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.755183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.755322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.755363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 
00:33:57.340 [2024-07-24 02:12:11.755486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.755511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.755648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.755673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.755834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.755878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.756046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.756214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.756377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.756507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.756683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.756864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 00:33:57.340 [2024-07-24 02:12:11.756995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.340 [2024-07-24 02:12:11.757038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.340 qpair failed and we were unable to recover it. 
00:33:57.340 [2024-07-24 02:12:11.757185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.340 [2024-07-24 02:12:11.757213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.340 qpair failed and we were unable to recover it.
[... the same three-line error triplet — posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 02:12:11.757 through 02:12:11.793 ...]
00:33:57.347 [2024-07-24 02:12:11.793207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.347 [2024-07-24 02:12:11.793235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.347 qpair failed and we were unable to recover it.
00:33:57.347 [2024-07-24 02:12:11.793388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.793413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.793517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.793541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.793710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.793736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.793844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.793870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.794019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.794043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.794198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.794240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.794382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.794409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.794546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.794572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.794724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.794752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.794880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.794919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 
00:33:57.347 [2024-07-24 02:12:11.795059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.795086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.795255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.795282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.795435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.795460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.795632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.795659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.795840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.795863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.795968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.795993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.796173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.796200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.796384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.796408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.796532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.796556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.796727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.796753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 
00:33:57.347 [2024-07-24 02:12:11.796939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.796964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.797071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.797111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.797264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.797291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.797473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.797497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.797659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.797683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.797824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.797851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.797994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.798020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.798132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.798158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.798304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.798333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.798430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.798459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 
00:33:57.347 [2024-07-24 02:12:11.798611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.798637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.798787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.347 [2024-07-24 02:12:11.798813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.347 qpair failed and we were unable to recover it. 00:33:57.347 [2024-07-24 02:12:11.798982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.799006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.799112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.799154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.799332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.799359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.799499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.799525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.799680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.799703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.799825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.799864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.800007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.800035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.800206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.800232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 
00:33:57.348 [2024-07-24 02:12:11.800389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.800414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.800522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.800546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.800678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.800704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.800814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.800840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.801010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.801137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.801276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.801454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.801656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.801832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 
00:33:57.348 [2024-07-24 02:12:11.801969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.801996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.802167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.802193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.802348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.802372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.802500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.802541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.802716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.802743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.802884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.802910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.803090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.803114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.803242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.803291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.803448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.803472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.803604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.803644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 
00:33:57.348 [2024-07-24 02:12:11.803794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.803818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.803945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.803990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.804134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.804162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.804305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.804339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.804488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.804511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.804615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.348 [2024-07-24 02:12:11.804639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.348 qpair failed and we were unable to recover it. 00:33:57.348 [2024-07-24 02:12:11.804792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.804819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.804970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.804997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.805175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.805199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.805297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.805326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 
00:33:57.349 [2024-07-24 02:12:11.805511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.805537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.805671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.805697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.805873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.805896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.806065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.806097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.806233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.806259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.806425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.806450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.806576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.806604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.806743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.806790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.806897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.806924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.807064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.807091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 
00:33:57.349 [2024-07-24 02:12:11.807243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.807267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.807367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.807390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.807524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.807548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.807730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.349 [2024-07-24 02:12:11.807756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.349 qpair failed and we were unable to recover it. 00:33:57.349 [2024-07-24 02:12:11.807910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.254772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.629 [2024-07-24 02:12:12.254996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.255028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.629 [2024-07-24 02:12:12.255154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.255182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.629 [2024-07-24 02:12:12.255360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.255390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.629 [2024-07-24 02:12:12.255528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.255554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.629 [2024-07-24 02:12:12.255801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.255830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 
00:33:57.629 [2024-07-24 02:12:12.255969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.256004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.629 [2024-07-24 02:12:12.256150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.629 [2024-07-24 02:12:12.256178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.629 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.256354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.256385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.256549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.256573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.256750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.256784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.256929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.256957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.257087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.257113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.257222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.257248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.257395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.257427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.257572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.257602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 
00:33:57.630 [2024-07-24 02:12:12.257749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.257776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.257923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.257949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.258083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.258119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.258253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.258283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.258469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.258496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.258602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.258629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.258744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.258774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.258916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.258943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.259144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.259171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.259324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.259353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 
00:33:57.630 [2024-07-24 02:12:12.259481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.259518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.259641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.259671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.259831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.259857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.259972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.259998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.260155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.260185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.260355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.260385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.260531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.260558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.260693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.260741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.260890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.260921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.261066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.261097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 
00:33:57.630 [2024-07-24 02:12:12.261270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.261298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.261466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.261497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.261641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.261671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.261818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.261850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.262011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.262036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.262171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.630 [2024-07-24 02:12:12.262199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.630 qpair failed and we were unable to recover it. 00:33:57.630 [2024-07-24 02:12:12.262331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.631 [2024-07-24 02:12:12.262358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.631 qpair failed and we were unable to recover it. 00:33:57.631 [2024-07-24 02:12:12.262476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.631 [2024-07-24 02:12:12.262503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.631 qpair failed and we were unable to recover it. 00:33:57.631 [2024-07-24 02:12:12.262613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.631 [2024-07-24 02:12:12.262640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.631 qpair failed and we were unable to recover it. 00:33:57.631 [2024-07-24 02:12:12.262756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.631 [2024-07-24 02:12:12.262783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.631 qpair failed and we were unable to recover it. 
00:33:57.631 [2024-07-24 02:12:12.262914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:33:57.631 [2024-07-24 02:12:12.262943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 
00:33:57.631 qpair failed and we were unable to recover it. 
00:33:57.631 [... the same three-line sequence — connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error (tqpair=0xcc6600, addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats continuously from 2024-07-24 02:12:12.263086 through 02:12:12.299300; duplicate entries omitted ...]
00:33:57.637 [2024-07-24 02:12:12.299442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.299469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.299576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.299602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.299778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.299807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.299913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.299941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.300120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.300146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.300295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.300333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.300483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.300512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.300685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.300714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.300874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.300900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.301013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.301038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 
00:33:57.637 [2024-07-24 02:12:12.301224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.301250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.301386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.301429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.301587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.301613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.301744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.301787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.301936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.301964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.302100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.302128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.302285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.302311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.302431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.302474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.302621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.302650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 00:33:57.637 [2024-07-24 02:12:12.302831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.302859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.637 qpair failed and we were unable to recover it. 
00:33:57.637 [2024-07-24 02:12:12.302977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.637 [2024-07-24 02:12:12.303003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.303129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.303155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.303310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.303346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.303530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.303556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.303722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.303748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.303879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.303904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.304055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.304097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.304236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.304265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.304393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.304420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.304555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.304582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 
00:33:57.638 [2024-07-24 02:12:12.304744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.304769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.304944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.304973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.305101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.305129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.305287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.305313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.305477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.305507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.305649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.305678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.305806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.305833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.305960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.305986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.306112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.306141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.306262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.306291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 
00:33:57.638 [2024-07-24 02:12:12.306479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.306506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.306682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.306711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.306851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.306880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.307029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.307057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.307203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.307229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.307402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.307431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.307576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.307605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.307720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.307748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.307900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.307926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.308057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.308103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 
00:33:57.638 [2024-07-24 02:12:12.308270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.308298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.308411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.308440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.308621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.308647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.308755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.308796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.308955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.308981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.309115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.309152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.309329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.309356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.638 qpair failed and we were unable to recover it. 00:33:57.638 [2024-07-24 02:12:12.309486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.638 [2024-07-24 02:12:12.309529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.309679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.309708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.309843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.309872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 
00:33:57.639 [2024-07-24 02:12:12.310002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.310028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.310165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.310191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.310338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.310368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.310544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.310574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.310734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.310760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.310864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.310890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.311048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.311077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.311224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.311252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.311379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.311406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.311517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.311543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 
00:33:57.639 [2024-07-24 02:12:12.311681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.311709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.311813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.311841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.311982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.312008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.312163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.312206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.312352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.312381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.312524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.312552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.312726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.312752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.312884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.312927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.313063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.313091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.313240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.313269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 
00:33:57.639 [2024-07-24 02:12:12.313424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.313450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.313548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.313574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.313705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.313734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.313898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.313927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.314067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.314093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.314202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.314228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.314400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.314429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.639 [2024-07-24 02:12:12.314572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.639 [2024-07-24 02:12:12.314602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.639 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.314742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.314768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.314923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.314965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 
00:33:57.640 [2024-07-24 02:12:12.315124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.315154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.315312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.315346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.315503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.315529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.315665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.315692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.315831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.315860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.316008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.316036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.316195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.316221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.316354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.316398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.316544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.316573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.316692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.316722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 
00:33:57.640 [2024-07-24 02:12:12.316874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.316900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.317905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.317931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.318059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.318085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.318213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.318256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 
00:33:57.640 [2024-07-24 02:12:12.318366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.318395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.318534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.318564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.318712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.318738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.318865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.318891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.319046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.319074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.319257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.319283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.319421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.319447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.319577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.319608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.319777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.319806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.319955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.319983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 
00:33:57.640 [2024-07-24 02:12:12.320141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.320166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.320300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.320333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.320484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.320513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.320691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.320717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.640 qpair failed and we were unable to recover it. 00:33:57.640 [2024-07-24 02:12:12.320852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.640 [2024-07-24 02:12:12.320879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.320979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.321007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.321165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.321195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.321344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.321373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.321508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.321534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.321698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.321724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 
00:33:57.641 [2024-07-24 02:12:12.321875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.321905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.322029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.322059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.322240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.322265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.322397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.322440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.322612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.322641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.322757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.322785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.322963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.322989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.323147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.323173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.323304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.323336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 00:33:57.641 [2024-07-24 02:12:12.323494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.641 [2024-07-24 02:12:12.323523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.641 qpair failed and we were unable to recover it. 
00:33:57.647 [2024-07-24 02:12:12.358582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.358608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.358775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.358819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.358967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.358996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.359170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.359198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.359360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.359386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.359498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.359524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.359730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.359772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.359943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.359971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.360126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.360153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.360346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.360387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 
00:33:57.647 [2024-07-24 02:12:12.360529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.360557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.360696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.360724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.360876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.360902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.361017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.361044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.361201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.361230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.361355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.361385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.361542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.361568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.361691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.361732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.361880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.361908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.362045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.362073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 
00:33:57.647 [2024-07-24 02:12:12.362225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.362251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.362382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.362425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.362569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.362602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.362706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.362734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.362884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.362909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.363041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.363082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.363201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.363230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.363370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.363399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.363551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.647 [2024-07-24 02:12:12.363576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.647 qpair failed and we were unable to recover it. 00:33:57.647 [2024-07-24 02:12:12.363709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.363751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 
00:33:57.648 [2024-07-24 02:12:12.363921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.363950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.364062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.364090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.364241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.364267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.364450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.364479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.364600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.364630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.364765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.364794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.364976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.365001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.365115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.365141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.365298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.365331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.365500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.365529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 
00:33:57.648 [2024-07-24 02:12:12.365703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.365729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.365830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.365856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.366011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.366040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.366174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.366202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.366347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.366380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.366529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.366557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.366696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.366725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.366894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.366923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.367038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.367064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.367199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.367225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 
00:33:57.648 [2024-07-24 02:12:12.367411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.367437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.367620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.367649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.367800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.367827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.367952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.367993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.368142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.368170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.368343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.368369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.368499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.368525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.368628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.368655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.368834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.368862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.368998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.369027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 
00:33:57.648 [2024-07-24 02:12:12.369152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.369179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.369338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.369379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.369511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.369540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.369685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.369717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.648 [2024-07-24 02:12:12.369831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.648 [2024-07-24 02:12:12.369857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.648 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.370018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.370044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.370199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.370226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.370399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.370428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.370568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.370594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.370731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.370756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 
00:33:57.649 [2024-07-24 02:12:12.370899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.370924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.371121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.371147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.371270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.371295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.371439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.371482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.371623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.371651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.371827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.371855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.372011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.372036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.372175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.372217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.372358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.372385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.372514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.372540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 
00:33:57.649 [2024-07-24 02:12:12.372678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.372703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.372811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.372837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.373000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.373029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.373206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.373241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.373374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.373400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.373523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.373549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.373707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.373737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.373876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.373905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.374118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.374146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.374262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.374290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 
00:33:57.649 [2024-07-24 02:12:12.374444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.374476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.374610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.374654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.374810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.374843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.375019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.649 [2024-07-24 02:12:12.375048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.649 qpair failed and we were unable to recover it. 00:33:57.649 [2024-07-24 02:12:12.375220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.375252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.375423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.375452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.375582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.375617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.375757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.375799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.375934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.375964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.376158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.376184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 
00:33:57.650 [2024-07-24 02:12:12.376321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.376362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.376552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.376581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.376726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.376762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.376943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.376971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.377096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.377129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.377247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.377273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.377398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.377424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.377584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.377614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.377762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.377789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.377897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.377928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 
00:33:57.650 [2024-07-24 02:12:12.378086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.378112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.378253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.378282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.378449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.378477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.378632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.378676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.378836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.378862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.378986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.379011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.379121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.379147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.379281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.379308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.379459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.379488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.379628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.379656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 
00:33:57.650 [2024-07-24 02:12:12.379836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.379861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.380001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.380027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.380252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.380280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.380434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.380463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.380620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.380646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.380772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.380813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.380948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.380976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.650 [2024-07-24 02:12:12.381148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.650 [2024-07-24 02:12:12.381176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.650 qpair failed and we were unable to recover it. 00:33:57.651 [2024-07-24 02:12:12.381308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.651 [2024-07-24 02:12:12.381342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.651 qpair failed and we were unable to recover it. 00:33:57.651 [2024-07-24 02:12:12.381474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.651 [2024-07-24 02:12:12.381500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.651 qpair failed and we were unable to recover it. 
00:33:57.651 [2024-07-24 02:12:12.381646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.651 [2024-07-24 02:12:12.381675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.651 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every retried connection attempt from 02:12:12.381 through 02:12:12.418; only the first and last occurrences are kept here ...]
00:33:57.658 [2024-07-24 02:12:12.418276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.418304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it.
00:33:57.658 [2024-07-24 02:12:12.418468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.418494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.418624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.418666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.418812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.418840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.418997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.419023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.419183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.419210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.419392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.419422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.419530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.419558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.419677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.419706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.419886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.419912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.420066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.420094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 
00:33:57.658 [2024-07-24 02:12:12.420232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.420260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.420383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.420412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.420560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.420586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.420729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.420755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.420851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.420877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.421013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.421042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.658 [2024-07-24 02:12:12.421204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.658 [2024-07-24 02:12:12.421241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.658 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.421443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.421504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.421680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.421709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.421820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.421848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 
00:33:57.659 [2024-07-24 02:12:12.422033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.422060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.422214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.422248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.422409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.422436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.422570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.422596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.422737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.422770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.422900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.422954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.423074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.423102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.423273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.423302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.423445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.423471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.423604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.423634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 
00:33:57.659 [2024-07-24 02:12:12.423794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.423822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.423956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.423985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.424162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.424188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.424282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.424306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.424488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.424517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.424663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.424692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.424836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.424863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.424971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.424997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.425160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.425189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.425333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.425370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 
00:33:57.659 [2024-07-24 02:12:12.425519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.425545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.425669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.425696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.425883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.425912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.426033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.426061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.426211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.426237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.426412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.426441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.426586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.426625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.426781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.426810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.426964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.426990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.659 qpair failed and we were unable to recover it. 00:33:57.659 [2024-07-24 02:12:12.427135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.659 [2024-07-24 02:12:12.427161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 
00:33:57.660 [2024-07-24 02:12:12.427298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.427346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.427458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.427486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.427646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.427672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.427807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.427850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.427962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.427991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.428172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.428201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.428336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.428367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.428474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.428500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.428683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.428711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.428862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.428891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 
00:33:57.660 [2024-07-24 02:12:12.429046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.429071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.429168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.429194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.429329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.429359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.429507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.429540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.429697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.429722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.429855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.429881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.430032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.430061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.430212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.430242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.430405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.430431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.430608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.430636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 
00:33:57.660 [2024-07-24 02:12:12.430797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.430823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.430981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.431007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.431135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.431161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.431290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.431337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.431513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.431542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.431682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.431711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.431854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.431879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.432016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.432042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.432246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.432275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.432460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.432489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 
00:33:57.660 [2024-07-24 02:12:12.432639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.432664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.432764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.432789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.432904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.432932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.433077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.433107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.433288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.433314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.660 [2024-07-24 02:12:12.433473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.660 [2024-07-24 02:12:12.433502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.660 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.433649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.433677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.433842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.433870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.434050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.434076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.434250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.434278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 
00:33:57.661 [2024-07-24 02:12:12.434435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.434463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.434635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.434663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.434838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.434864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.434970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.434996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.435131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.435160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.435334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.435368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.435492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.435518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.435677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.435703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.435847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.435876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.436014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.436042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 
00:33:57.661 [2024-07-24 02:12:12.436190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.436216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.436375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.436404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.436576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.436616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.436783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.436811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.436925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.436955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.437109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.437135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.437297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.437365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.437530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.437556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.437698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.437724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.437854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.437899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 
00:33:57.661 [2024-07-24 02:12:12.438078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.438104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.438234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.438260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.438431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.438458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.438594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.438638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.438803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.438832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.661 [2024-07-24 02:12:12.438987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.661 [2024-07-24 02:12:12.439015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.661 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.439188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.439214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.439329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.439356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.439522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.439550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.439661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.439689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 
00:33:57.662 [2024-07-24 02:12:12.439811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.439838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.439974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.440185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.440339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.440490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.440627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.440775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.440950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.440978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.441157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.441182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 00:33:57.662 [2024-07-24 02:12:12.441285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.662 [2024-07-24 02:12:12.441312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.662 qpair failed and we were unable to recover it. 
00:33:57.662 [2024-07-24 02:12:12.441469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.662 [2024-07-24 02:12:12.441497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.662 qpair failed and we were unable to recover it.
00:33:57.662 (the same three-line error repeats continuously between 02:12:12.441 and 02:12:12.478: connect() failed with errno = 111 in posix_sock_create, followed by the sock connection error for tqpair=0xcc6600 with addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.")
00:33:57.668 [2024-07-24 02:12:12.478035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.668 [2024-07-24 02:12:12.478063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.668 qpair failed and we were unable to recover it.
00:33:57.668 [2024-07-24 02:12:12.478248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.668 [2024-07-24 02:12:12.478275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.668 qpair failed and we were unable to recover it. 00:33:57.668 [2024-07-24 02:12:12.478444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.668 [2024-07-24 02:12:12.478473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.668 qpair failed and we were unable to recover it. 00:33:57.668 [2024-07-24 02:12:12.478616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.668 [2024-07-24 02:12:12.478644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.668 qpair failed and we were unable to recover it. 00:33:57.668 [2024-07-24 02:12:12.478836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.668 [2024-07-24 02:12:12.478862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.668 qpair failed and we were unable to recover it. 00:33:57.668 [2024-07-24 02:12:12.478994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.668 [2024-07-24 02:12:12.479020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.668 qpair failed and we were unable to recover it. 00:33:57.668 [2024-07-24 02:12:12.479151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.479186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.479347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.479378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.479526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.479552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.479721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.479747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.479940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.479992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 
00:33:57.669 [2024-07-24 02:12:12.480128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.480157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.480272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.480300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.480460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.480618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.480660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.480841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.480870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.481028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.481054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.481188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.481213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.481371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.481407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.481547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.481588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.481714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.481742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 
00:33:57.669 [2024-07-24 02:12:12.481897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.481922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.482026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.482051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.482211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.482239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.482367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.482396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.482545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.482571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.482716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.482758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.482901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.482931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.483048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.483077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.483227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.483253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.483387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.483430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 
00:33:57.669 [2024-07-24 02:12:12.483580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.483609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.483747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.483776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.483923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.483949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.484104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.484149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.484328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.484357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.484509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.669 [2024-07-24 02:12:12.484535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.669 qpair failed and we were unable to recover it. 00:33:57.669 [2024-07-24 02:12:12.484639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.484670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.484804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.484830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.485013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.485042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.485224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.485250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 
00:33:57.670 [2024-07-24 02:12:12.485407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.485434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.485535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.485577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.485719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.485748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.485899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.485927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.486104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.486130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.486233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.486258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.486430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.486456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.486588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.486615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.486749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.486775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.486884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.486911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 
00:33:57.670 [2024-07-24 02:12:12.487077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.487128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.487244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.487275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.487474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.487501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.487650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.487679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.487826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.487855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.487968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.487997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.488145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.488173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.488351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.488385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.488496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.488525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.488676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.488704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 
00:33:57.670 [2024-07-24 02:12:12.488857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.488883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.489011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.489037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.489219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.489248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.489382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.489411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.489539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.489566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.489730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.489771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.489921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.489947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.490041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.490067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.490198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.490224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 00:33:57.670 [2024-07-24 02:12:12.490329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.670 [2024-07-24 02:12:12.490366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.670 qpair failed and we were unable to recover it. 
00:33:57.670 [2024-07-24 02:12:12.490488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.490517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.490696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.490724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.490882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.490908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.491079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.491108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.491285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.491314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.491468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.491497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.491676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.491702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.491899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.491960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.492131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.492160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.492276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.492305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 
00:33:57.671 [2024-07-24 02:12:12.492464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.492490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.492670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.492699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.492879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.492908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.493022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.493051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.493203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.493229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.493405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.493434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.493549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.493577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.493728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.493757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.493930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.493956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.494106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.494137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 
00:33:57.671 [2024-07-24 02:12:12.494285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.494314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.494497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.494526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.494680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.494707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.494841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.494883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.495019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.495048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.495198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.495224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.671 qpair failed and we were unable to recover it. 00:33:57.671 [2024-07-24 02:12:12.495360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.671 [2024-07-24 02:12:12.495387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.495562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.495591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.495762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.495790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.495892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.495921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 
00:33:57.672 [2024-07-24 02:12:12.496076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.496102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.496233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.496275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.496441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.496467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.496572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.496598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.496774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.496803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.496924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.496966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.497115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.497144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.497301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.497340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.497471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.497497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.497622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.497648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 
00:33:57.672 [2024-07-24 02:12:12.497825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.497851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.497986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.498141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.498273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.498446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.498591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.498811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.498969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.498995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.499133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.499160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.499301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.499337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 
00:33:57.672 [2024-07-24 02:12:12.499443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.499468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.499571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.499599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.499746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.499774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.499891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.499920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.500056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.500081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.500184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.500210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.500340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.500383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.500569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.500611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.500766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.500794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.672 [2024-07-24 02:12:12.500975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.501004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 
00:33:57.672 [2024-07-24 02:12:12.501148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.672 [2024-07-24 02:12:12.501179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.672 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.501332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.501379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.501523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.501549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.501663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.501688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.501813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.501841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.501985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.502013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.502166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.502192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.502332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.502395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.502523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.502560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 00:33:57.673 [2024-07-24 02:12:12.502696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.673 [2024-07-24 02:12:12.502727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.673 qpair failed and we were unable to recover it. 
00:33:57.673 [2024-07-24 02:12:12.502888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.502914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.503044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.503087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.503219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.503247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.503369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.503399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.503557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.503587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.503720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.503753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.503872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.503907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.504029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.504069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.504232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.504268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.504458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.504496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 
00:33:57.989 [2024-07-24 02:12:12.504643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.504684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.504850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.504886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.505065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.505100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.505244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.505281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.505444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.505484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.505655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.505694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.505828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.505864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.989 qpair failed and we were unable to recover it. 00:33:57.989 [2024-07-24 02:12:12.506063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.989 [2024-07-24 02:12:12.506102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.506237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.506276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.506438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.506469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.506638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.506666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.506800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.506847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.507024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.507052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.507185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.507230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.507401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.507428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.507532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.507556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.507715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.507746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.507892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.507921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.508069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.508095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.508228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.508278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.508424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.508451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.508584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.508627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.508785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.508817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.508976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.509005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.509163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.509192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.509309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.509345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.509474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.509501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.509643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.509669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.509799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.509828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.509993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.510029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.510165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.510193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.510300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.510333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.510496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.510525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.510676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.510704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.510858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.510884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.511025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.511070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.511235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.511281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.511462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.511491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.511620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.511648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.511823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.511852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.511995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.512024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.512166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.512195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.512331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.512369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.512477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.512504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.512657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.512687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.512830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.512867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.513030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.513165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.513362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.513520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.513663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.513790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.513969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.513998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.514107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.514135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.514308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.514341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.514455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.514481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.514667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.514695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.514859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.514884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.514979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.515005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.515134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.515160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.515329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.515395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.515511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.515538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.515699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.515726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.515879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.515908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.516085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.516136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.516286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.516315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.516462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.516489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.516613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.516640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.516916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.516971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.517142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.517172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.517302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.517333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.517445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.517471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.517570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.517617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.517765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.517794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.517955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.517983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.518193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd4620 is same with the state(5) to be set 00:33:57.990 [2024-07-24 02:12:12.518419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.518457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.518583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.518611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.518762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.518789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.518956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.518983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 
00:33:57.990 [2024-07-24 02:12:12.519152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.519178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.519311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.519346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.519445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.519471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.519636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.519665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.519849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.519874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.520037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.520093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.520217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.520246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.520383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.520410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.990 [2024-07-24 02:12:12.520512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.990 [2024-07-24 02:12:12.520538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.990 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.520664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.520693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.520844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.520874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.520981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.521155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.521334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.521480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.521662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.521840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.521969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.521995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.522156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.522185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.522347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.522377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.522502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.522542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.522714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.522745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.522880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.522908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.523020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.523047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.523187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.523214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.523328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.523367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.523495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.523521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.523660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.523689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.523844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.523871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.524032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.524077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.524190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.524220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.524369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.524395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.524522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.524548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.524677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.524706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.524840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.524868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.524974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.525000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.525127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.525156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.525331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.525362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.525490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.525516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.525664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.525693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.525823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.525849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.526049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.526185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.526344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.526480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.526652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.526820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.526959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.527159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.527338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.527478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.527637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.527777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.527953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.527979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.528133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.528163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.528338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.528365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.528497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.528524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.528680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.528707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.528842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.528868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.528986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.529153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.529312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.529453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.529610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.529745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.529924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.529970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.530091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.530121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.530250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.530277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.530391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.530418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.530546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.530572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.530729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.530755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.530870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.530898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.531026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.531052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.531192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.531218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.531351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.531379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.531481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.531506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.531636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.531663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.531808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.531837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.532010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.532039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.532190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.532216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 
00:33:57.991 [2024-07-24 02:12:12.532353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.532381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.991 qpair failed and we were unable to recover it. 00:33:57.991 [2024-07-24 02:12:12.532517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.991 [2024-07-24 02:12:12.532545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.532646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.532673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.532809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.532853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.532975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.533005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.533129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.533156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.533290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.533322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.533483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.533509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.533615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.533641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.533772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.533820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 
00:33:57.992 [2024-07-24 02:12:12.533973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.534131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.534296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.534494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.534619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.534769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.534924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.534953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.535085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.535112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.535247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.535273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.535389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.535416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 
00:33:57.992 [2024-07-24 02:12:12.535521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.535549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.535707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.535751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.535924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.535953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.536112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.536278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.536457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.536598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.536758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.536998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 
00:33:57.992 [2024-07-24 02:12:12.537152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.537310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.537461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.537593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.537722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.537881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.537907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.538038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.538064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.538218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.538247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.538412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.538443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.538552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.538577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 
00:33:57.992 [2024-07-24 02:12:12.538701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.538727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.538850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.538893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.539038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.539066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.539241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.539412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.539439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.539538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.539576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.539741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.539767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.539871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.539897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.540073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.540099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.540257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.540286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 
00:33:57.992 [2024-07-24 02:12:12.540466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.540493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.540608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.992 [2024-07-24 02:12:12.540634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.992 qpair failed and we were unable to recover it. 00:33:57.992 [2024-07-24 02:12:12.540763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.540806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.540947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.540976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.541092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.541122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.541248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.541274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.541419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.541445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.541547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.541572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.541706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.541732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.541886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.541915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.542057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.542086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.542202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.542231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.542410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.542436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.542565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.542619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.542746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.542772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.542903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.542933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.543105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.543133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.543256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.543282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.543426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.543454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.543553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.543601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.543758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.543785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.543891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.543917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.544130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.544158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.544325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.544363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.544462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.544488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.544609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.544637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.544816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.544842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.544991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.545020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.545139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.545168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.545395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.545421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.545573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.545617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.545760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.545789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.545934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.545964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.546114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.546152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.546293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.546330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.546465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.546491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.546597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.546624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.546755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.546782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.546917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.546944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.547130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.547161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.547272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.547301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.547472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.547498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.547599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.547625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.547784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.547813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.547937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.547983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.548155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.548184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.548333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.548385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.548514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.548540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.548685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.548711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.548817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.548843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.548948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.548974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.549096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.549122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.549251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.549280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.549461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.549488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.549592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.549640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.549797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.549824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.549934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.549961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.550119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.550165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.550310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.550345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.550495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.550522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.550658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.550703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.550850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.550880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.551035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.551061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.551219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.551249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.551407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.551434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.551562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.551596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.551731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.551774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.551941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.551970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.552106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.552132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.552242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.552268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 
00:33:57.993 [2024-07-24 02:12:12.552435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.993 [2024-07-24 02:12:12.552462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.993 qpair failed and we were unable to recover it. 00:33:57.993 [2024-07-24 02:12:12.552596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.552622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.552751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.552777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.552907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.552936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.553081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.553106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.553218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.553244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.553415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.553440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.553582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.553608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.553725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.553766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.553905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.553935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 
00:33:57.994 [2024-07-24 02:12:12.554082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.554109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.554302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.554341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.554481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.554507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.554645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.554671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.554807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.554833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.554962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.554988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.555114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.555139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.555270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.555313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.555485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.555511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.555618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.555644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 
00:33:57.994 [2024-07-24 02:12:12.555742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.555768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.555921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.555950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.556130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.556289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.556481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.556610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.556736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.556872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.556983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.557110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 
00:33:57.994 [2024-07-24 02:12:12.557288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.557470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.557599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.557741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.557923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.557949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.558081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.558124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.558234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.558262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.558394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.558421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.558546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.558571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 00:33:57.994 [2024-07-24 02:12:12.558723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.994 [2024-07-24 02:12:12.558751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.994 qpair failed and we were unable to recover it. 
00:33:57.994 [2024-07-24 02:12:12.558909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.994 [2024-07-24 02:12:12.558940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:57.994 qpair failed and we were unable to recover it.
[Identical three-line failure sequences — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeat continuously from 02:12:12.558909 through 02:12:12.595564 for tqpair=0xcc6600, 0x7f115c000b90, and 0x7f1164000b90, all targeting addr=10.0.0.2, port=4420.]
00:33:57.998 [2024-07-24 02:12:12.595537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:57.998 [2024-07-24 02:12:12.595564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:57.998 qpair failed and we were unable to recover it.
00:33:57.998 [2024-07-24 02:12:12.595667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.595694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.595848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.595875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.595982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.596111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.596263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.596454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.596604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.596738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.596923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.596949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.597065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.597091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 
00:33:57.998 [2024-07-24 02:12:12.597215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.597241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.597381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.597409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.597522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.597549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.597661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.597690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.597818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.597847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.598017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.598046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.598166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.598195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.598372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.598399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.998 [2024-07-24 02:12:12.598528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.998 [2024-07-24 02:12:12.598554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.998 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.598745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.598774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.598918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.598947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.599095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.599123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.599279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.599305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.599442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.599468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.599578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.599622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.599774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.599801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.599910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.599936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.600085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.600114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.600270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.600296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.600427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.600453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.600586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.600631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.600758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.600785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.600894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.600920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.601046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.601075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.601275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.601304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.601464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.601492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.601612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.601641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.601779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.601805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.601937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.601963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.602122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.602151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.602273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.602300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.602432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.602458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.602565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.602591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.602685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.602727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.602864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.602893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.603032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.603061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.603206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.603242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.603422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.603449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.603577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.603621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.603748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.603774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.603905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.603931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.604062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.604091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.604205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.604233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.604378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.604404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.604535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.604561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.604757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.604783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.604933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.604961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.605095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.605123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.605236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.605263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.605367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.605394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.605530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.605556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.605689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.605714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.605851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.605894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.606058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.606086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.606256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.606285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.606448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.606474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.606585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.606631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.606765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.606810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.606931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.606960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.607105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.607133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.607323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.607381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.607548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.607576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.607679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.607707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.607863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.607910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.608116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.608160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.608326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.608354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.608510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.608537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.608655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.608699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.608828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.608873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.609002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.609029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 
00:33:57.999 [2024-07-24 02:12:12.609167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.609193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.609327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.609355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.609536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.609581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.609760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.609805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.609956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.609985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.610132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:57.999 [2024-07-24 02:12:12.610158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:57.999 qpair failed and we were unable to recover it. 00:33:57.999 [2024-07-24 02:12:12.610293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.610325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.610474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.610521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.610680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.610723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.610909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.610953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 
00:33:58.000 [2024-07-24 02:12:12.611090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.611115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.611249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.611276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.611467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.611496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.611663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.611706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.611895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.611939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.612049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.612077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.612175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.612202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.612337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.612365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.612513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.612557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.612741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.612783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 
00:33:58.000 [2024-07-24 02:12:12.612893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.612923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.613030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.613056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.613208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.613234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.613389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.613420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.613558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.613588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.613760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.613786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.613918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.613943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.614074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.614100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.614231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.614257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.614367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.614395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 
00:33:58.000 [2024-07-24 02:12:12.614526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.614569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.614715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.614743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.614910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.614954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.615085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.615112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.615247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.615273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.615409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.615453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.615600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.615645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.615769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.615816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.615921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.615947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.616083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.616108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 
00:33:58.000 [2024-07-24 02:12:12.616221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.616248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.616397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.616441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.616598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.616643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.616789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.616832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.616959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.616985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.617111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.617137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.617269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.617296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.617461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.617488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.617661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.617704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.617857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.617900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 
00:33:58.000 [2024-07-24 02:12:12.618032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.618058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.618165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.618192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.618332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.618359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.618497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.618541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.618655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.618684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.618824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.618868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.619003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.619028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.619158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.619184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.619321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.619349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 00:33:58.000 [2024-07-24 02:12:12.619476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.000 [2024-07-24 02:12:12.619502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.000 qpair failed and we were unable to recover it. 
00:33:58.003 [2024-07-24 02:12:12.652410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.652457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.652613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.652656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.652780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.652823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.652930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.652956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.653087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.653115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.653258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.653284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.653430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.653457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.653569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.653596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.653700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.653726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.003 qpair failed and we were unable to recover it. 00:33:58.003 [2024-07-24 02:12:12.653851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.003 [2024-07-24 02:12:12.653878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.654014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.654167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.654304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.654473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.654635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.654789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.654914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.654940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.655043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.655069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.655197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.655224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.655351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.655379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.655514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.655540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.655673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.655698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.655858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.655885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.656046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.656209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.656415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.656566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.656726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.656851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.656978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.657106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.657238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.657420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.657644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.657804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.657963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.657990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.658121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.658148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.658285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.658312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.658445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.658489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.658678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.658721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.658892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.658924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.659029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.659056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.659181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.659207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.659339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.659376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.659560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.659604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.659766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.659810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.659940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.659967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.660124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.660150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.660280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.660307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.660449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.660494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.660678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.660722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.660901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.660945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.661077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.661103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.661233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.661260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.661447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.661491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.661639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.661682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.661802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.661846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.662008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.662140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.662284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.662450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.662592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.662722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.662881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.662908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.663011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.663037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.663139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.663165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.663302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.663348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.663509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.663553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.663701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.663731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.663927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.663971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 
00:33:58.004 [2024-07-24 02:12:12.664105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.664131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.664264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.664292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.664460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.664487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.664669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.664712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.664867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.664911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.665047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.665074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.665208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.665236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.665338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.665365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.665517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.004 [2024-07-24 02:12:12.665560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.004 qpair failed and we were unable to recover it. 00:33:58.004 [2024-07-24 02:12:12.665684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.665727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 
00:33:58.005 [2024-07-24 02:12:12.665852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.665903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.666925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.666951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.667082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.667108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.667235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.667261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 
00:33:58.005 [2024-07-24 02:12:12.667404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.667450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.667608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.667652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.667814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.667864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.668025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.668052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.668158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.668184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.668286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.668314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.668509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.668553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.668699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.668742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.668921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.668968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.669126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.669153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 
00:33:58.005 [2024-07-24 02:12:12.669259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.669285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.669423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.669477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.669655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.669686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.669831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.669880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.670039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.670065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.670196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.670222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.670373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.670403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.670577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.670606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.670759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.670785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.670939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.670966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 
00:33:58.005 [2024-07-24 02:12:12.671103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.671129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.671237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.671264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.671418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.671448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.671608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.671652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.671811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.671854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.671990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.672016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.672151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.672178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.672331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.672358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.672515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.672544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.672716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.672760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 
00:33:58.005 [2024-07-24 02:12:12.672865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.672894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.673028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.673054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.673193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.673219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.673369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.673399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.673597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.673642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.673821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.673869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.674004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.674032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.674162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.674188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.674300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.674332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 00:33:58.005 [2024-07-24 02:12:12.674466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.005 [2024-07-24 02:12:12.674510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.005 qpair failed and we were unable to recover it. 
00:33:58.005 [2024-07-24 02:12:12.674641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.005 [2024-07-24 02:12:12.674666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.005 qpair failed and we were unable to recover it.
00:33:58.005-00:33:58.008 [The same three-line failure repeats back-to-back for every subsequent connection attempt from 02:12:12.674801 through 02:12:12.709865: posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420; and each time the qpair fails and cannot be recovered.]
00:33:58.008 [2024-07-24 02:12:12.709968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.709995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.710128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.710155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.710291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.710323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.710463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.710489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.710596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.710622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.710728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.710754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.710915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.710942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.711105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.711135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.711270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.711297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.711424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.711452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.711607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.711651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.711788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.711833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.711982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.712026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.712160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.712186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.712326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.712354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.712503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.712547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.712679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.712705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.712834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.712861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.712997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.713122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.713281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.713423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.713609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.713806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.713958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.713984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.714109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.714136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.714273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.714299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.714471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.714516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.714692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.714735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.714887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.714931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.715064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.715090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.715213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.715239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.715380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.715407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.715557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.715598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.715783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.715830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.715985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.716029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.716164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.716191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.716327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.716353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.716490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.716517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.716693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.716723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.716871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.716897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.717026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.717052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.717208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.717235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.717368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.717395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.717530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.717556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.717712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.717738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.717862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.717905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.718033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.718062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.718199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.718226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.718367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.718394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.718552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.718578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.718710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.718737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.718895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.718921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.719054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.719080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.719220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.719246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.719403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.719447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.719637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.719681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.719795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.719838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.719967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.719993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.720151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.720176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.720374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.720401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.720553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.720596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.720720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.720747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.720876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.720902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.721032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.721058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.721187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.721213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.721327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.721354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.721466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.721492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.721655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.721680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 00:33:58.009 [2024-07-24 02:12:12.721846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.009 [2024-07-24 02:12:12.721872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.009 qpair failed and we were unable to recover it. 
00:33:58.009 [2024-07-24 02:12:12.722007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.722034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.722193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.722219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.722376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.722403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.722558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.722604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.722746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.722790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.722923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.722949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.723056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.723083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.723219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.723245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.723404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.723448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.723572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.723616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 
00:33:58.010 [2024-07-24 02:12:12.723797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.723840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.723968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.723993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.724122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.724149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.724328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.724355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.724508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.724556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.724705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.724747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.724868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.724913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.725047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.725076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.725237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.725262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.725440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.725485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 
00:33:58.010 [2024-07-24 02:12:12.725672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.725716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.725859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.725902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.726062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.726087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.726222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.726249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.726402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.726445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.726597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.726641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.726766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.726810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.726993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.727023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.727174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.727201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.727332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.727359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 
00:33:58.010 [2024-07-24 02:12:12.727507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.727551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.727712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.727742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.727913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.727956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.728088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.728114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.728247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.728275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.728438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.728469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.728672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.728715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.728872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.728915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.729057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.729083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.729223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.729249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 
00:33:58.010 [2024-07-24 02:12:12.729413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.729456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.729635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.729680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.729832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.729876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.730033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.730060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.730166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.730193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.730308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.730340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.730522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.730569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.730731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.730774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.730954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.730997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.731130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.731157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 
00:33:58.010 [2024-07-24 02:12:12.731302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.731334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.731462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.731505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.731655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.731700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.731866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.731910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.732016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.732043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.732181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.732209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.732329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.732357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.732516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.732545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.732673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.732717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 00:33:58.010 [2024-07-24 02:12:12.732849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.010 [2024-07-24 02:12:12.732876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.010 qpair failed and we were unable to recover it. 
00:33:58.010 [2024-07-24 02:12:12.733011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.010 [2024-07-24 02:12:12.733037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.010 qpair failed and we were unable to recover it.
00:33:58.010 [2024-07-24 02:12:12.733171 through 02:12:12.767120] the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f115c000b90, occasionally 0x7f1164000b90, with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt in this interval.
00:33:58.014 [2024-07-24 02:12:12.767259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.767285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.767425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.767470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.767599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.767642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.767771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.767797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.767952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.767978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.768085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.768245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.768390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.768549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.768685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 
00:33:58.014 [2024-07-24 02:12:12.768805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.768930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.768957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.769099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.769129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.769266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.769295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.769446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.769488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.769659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.769693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.769870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.769901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.770020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.770050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.770203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.770231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.770353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.770380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 
00:33:58.014 [2024-07-24 02:12:12.770491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.770519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.770672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.770720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.770845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.770889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.771106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.771158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.771314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.771376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.771534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.771579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.771706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.771735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.771878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.771927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.772062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.772088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.772218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.772245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 
00:33:58.014 [2024-07-24 02:12:12.772405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.772454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.772611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.772637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.772747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.772773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.772881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.772907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.773042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.773179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.773314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.773444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.773610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.773794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 
00:33:58.014 [2024-07-24 02:12:12.773921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.773948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.774065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.774092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.774250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.774276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.774382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.774409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.774538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.774564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.774690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.774734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.774871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.774898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.775031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.775195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.775355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 
00:33:58.014 [2024-07-24 02:12:12.775492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.775616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.775773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.775931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.775958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.776112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.776152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.776301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.776337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.776472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.776499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.776640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.776679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.776819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.776845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.776956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.776982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 
00:33:58.014 [2024-07-24 02:12:12.777111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.777140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.777254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.777284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.777464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.777491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.777651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.777681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.014 [2024-07-24 02:12:12.777821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.014 [2024-07-24 02:12:12.777851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.014 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.777969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.777998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.778137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.778166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.778296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.778328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.778465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.778491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.778632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.778661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.778804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.778833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.778986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.779015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.779161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.779190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.779342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.779385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.779491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.779518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.779674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.779704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.779850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.779879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.780006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.780050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.780193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.780223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.780343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.780386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.780494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.780521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.780737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.780763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.780945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.780974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.781118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.781147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.781307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.781340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.781505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.781531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.781700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.781727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.781876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.781904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.782048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.782077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.782197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.782227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.782395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.782435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.782580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.782609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.782788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.782833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.782957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.782987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.783187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.783236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.783344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.783371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.783528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.783554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.783665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.783695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.783868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.783912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.784057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.784101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.784232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.784268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.784416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.784464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.784620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.784663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.784775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.784820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.784977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.785171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.785308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.785437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.785584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.785711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.785866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.785893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.786021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.786048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.786177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.786204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.786334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.786361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.786520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.786552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.786695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.786724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.786893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.786922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.787181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.787209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.787364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.787391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.787524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.787551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.787669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.787700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.787853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.787882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.788028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.788057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.788211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.788240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.788399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.788426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.788552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.788597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.788724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.788770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.788905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.788950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.789084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.789111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.789246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.789273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 
00:33:58.015 [2024-07-24 02:12:12.789406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.789433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.789536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.789563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.789675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.789701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.015 [2024-07-24 02:12:12.789831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.015 [2024-07-24 02:12:12.789858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.015 qpair failed and we were unable to recover it. 00:33:58.016 [2024-07-24 02:12:12.789984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.016 [2024-07-24 02:12:12.790015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.016 qpair failed and we were unable to recover it. 00:33:58.016 [2024-07-24 02:12:12.790126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.016 [2024-07-24 02:12:12.790152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.016 qpair failed and we were unable to recover it. 00:33:58.016 [2024-07-24 02:12:12.790288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.016 [2024-07-24 02:12:12.790320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.016 qpair failed and we were unable to recover it. 00:33:58.016 [2024-07-24 02:12:12.790448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.016 [2024-07-24 02:12:12.790475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.016 qpair failed and we were unable to recover it. 00:33:58.016 [2024-07-24 02:12:12.790607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.016 [2024-07-24 02:12:12.790633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.016 qpair failed and we were unable to recover it. 00:33:58.016 [2024-07-24 02:12:12.790764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.016 [2024-07-24 02:12:12.790791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.016 qpair failed and we were unable to recover it. 
00:33:58.021 [2024-07-24 02:12:12.824107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.824133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.824268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.824294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.824426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.824472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.824633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.824680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.824814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.824840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.825000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.825175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.825307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.825497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.825683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 
00:33:58.021 [2024-07-24 02:12:12.825814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.825935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.825962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.826100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.826126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.826259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.826285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.826420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.826447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.826601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.826628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.826781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.826808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.826941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.826968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.827126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.827152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 00:33:58.021 [2024-07-24 02:12:12.827311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.021 [2024-07-24 02:12:12.827344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.021 qpair failed and we were unable to recover it. 
00:33:58.021 [2024-07-24 02:12:12.827488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.827532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.827681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.827725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.827900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.827931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.828101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.828130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.828281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.828309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.828438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.828467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.828583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.828612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.828817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.828847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.829011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.829054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.829157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.829183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 
00:33:58.022 [2024-07-24 02:12:12.829314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.829347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.829479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.829506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.829651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.829680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.829848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.829893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.829994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.830025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.830133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.830161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.830290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.830338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.830498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.830529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.830670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.830698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.830868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.830897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 
00:33:58.022 [2024-07-24 02:12:12.831013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.831042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.831184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.831212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.831368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.831396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.831546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.831591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.831746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.831790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.831945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.831988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.832087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.832114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.832253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.832280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.832445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.832478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.832624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.832656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 
00:33:58.022 [2024-07-24 02:12:12.832804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.832834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.832994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.833021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.833173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.833201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.833369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.833405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.833561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.833590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.833712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.833737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.833899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.022 [2024-07-24 02:12:12.833931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.022 qpair failed and we were unable to recover it. 00:33:58.022 [2024-07-24 02:12:12.834099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.834128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.834268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.834296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.834482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.834516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 
00:33:58.023 [2024-07-24 02:12:12.834649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.834678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.834824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.834852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.834972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.835002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.835195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.835242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.835408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.835436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.835593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.835637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.835794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.835839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.835994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.836038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.836174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.836202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.836356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.836386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 
00:33:58.023 [2024-07-24 02:12:12.836532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.836562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.836705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.836734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.836844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.836873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.837045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.837074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.837217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.837245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.837398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.837426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.837607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.837652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.837805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.837852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.838008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.838052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.838204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.838231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 
00:33:58.023 [2024-07-24 02:12:12.838371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.838398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.838553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.838600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.838760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.838804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.838952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.838982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.839136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.839163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.839297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.839330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.839510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.839554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.839780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.839832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.840016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.840065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.023 [2024-07-24 02:12:12.840198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.840225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 
00:33:58.023 [2024-07-24 02:12:12.840335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.023 [2024-07-24 02:12:12.840362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.023 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.840518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.840562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.840738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.840782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.840909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.840952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.841105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.841132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.841234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.841260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.841380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.841411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.841589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.841618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.841816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.841861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.841960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.841986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 
00:33:58.307 [2024-07-24 02:12:12.842116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.842142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.842281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.842306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.842503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.842549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.842683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.842727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.842888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.842932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.843092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.843118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.843251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.843277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.843404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.843449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.843581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.843626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.843780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.843823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 
00:33:58.307 [2024-07-24 02:12:12.843939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.843968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.844142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.844168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.844300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.844332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.307 qpair failed and we were unable to recover it. 00:33:58.307 [2024-07-24 02:12:12.844463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.307 [2024-07-24 02:12:12.844489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.844593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.844620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.844780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.844806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.844914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.844940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.845067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.845093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.845246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.845272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.845388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.845416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 
00:33:58.308 [2024-07-24 02:12:12.845551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.845577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.845710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.845736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.845892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.845919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.846078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.846104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.846264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.846290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.846477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.846522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.846699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.846744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.846902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.846944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.847071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.847103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.847211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.847236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 
00:33:58.308 [2024-07-24 02:12:12.847363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.847392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.847554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.847597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.847766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.847794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.847964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.848008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.848109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.848136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.848267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.848294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.848460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.848488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.848639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.848667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.848863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.848906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.849065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.849091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 
00:33:58.308 [2024-07-24 02:12:12.849224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.849258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.849438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.849483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.849590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.849617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.849740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.849770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.849913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.849939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.850074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.850100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.850211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.850238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.850418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.850457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.850629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.850659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 00:33:58.308 [2024-07-24 02:12:12.850832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.308 [2024-07-24 02:12:12.850861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.308 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-24 02:12:12.851063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.851092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.851195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.851224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.851382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.851409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.851565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.851611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.851791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.851817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.851995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.852043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.852203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.852229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.852379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.852409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.852583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.852625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.852865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.852916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-24 02:12:12.853018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.853044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.853176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.853201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.853330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.853357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.853486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.853533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.853713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.853756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.853909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.853934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.854068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.854094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.854252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.854277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.854464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.854509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.854676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.854720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-24 02:12:12.854875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.854922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.855083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.855110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.855242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.855268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.855449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.855495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.855650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.855680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.855875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.855918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.856052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.856079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.856234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.856261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.856448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.856493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.856633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.856682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 
00:33:58.309 [2024-07-24 02:12:12.856837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.856886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.856989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.857015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.857181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.857209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.857360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.857396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.857573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.857617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.857732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.309 [2024-07-24 02:12:12.857774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.309 qpair failed and we were unable to recover it. 00:33:58.309 [2024-07-24 02:12:12.857932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.857958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.858094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.858119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.858252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.858277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.858392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.858419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 
00:33:58.310 [2024-07-24 02:12:12.858563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.858593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.858766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.858811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.858975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.859002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.859136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.859162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.859295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.859330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.859522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.859582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.859740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.859784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.859942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.859985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.860119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.860145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.860396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.860424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 
00:33:58.310 [2024-07-24 02:12:12.860582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.860608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.860721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.860749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.860920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.860964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.861127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.861153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.861287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.861312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.861511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.861558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.861687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.861716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.861972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.862026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.862182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.862208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.862340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.862372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 
00:33:58.310 [2024-07-24 02:12:12.862540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.862567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.862722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.862764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.862946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.862975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.863150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.863177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.863328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.863374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.863484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.863510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.863672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.863698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.863880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.863923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.864086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.864112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.864246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.864271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 
00:33:58.310 [2024-07-24 02:12:12.864426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.864472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.864630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.864674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.310 qpair failed and we were unable to recover it. 00:33:58.310 [2024-07-24 02:12:12.864858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.310 [2024-07-24 02:12:12.864901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.865038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.865063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.865202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.865228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.865346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.865372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.865527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.865570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.865739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.865784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.866043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.866087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.866229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.866256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 
00:33:58.311 [2024-07-24 02:12:12.866387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.866417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.866586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.866630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.866817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.866860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.867024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.867183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.867210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.867382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.867431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.867588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.867630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.867791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.867817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.867950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.867975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.868115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.868142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 
00:33:58.311 [2024-07-24 02:12:12.868274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.868300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.868464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.868490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.868634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.868661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.868790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.868816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.868944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.868971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.869076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.869101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.869225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.869251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.869398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.869425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.869581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.869607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.869713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.869739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 
00:33:58.311 [2024-07-24 02:12:12.869862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.869887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.870016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.870042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.870198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.870225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.870333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.870369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.870509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.870553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.870741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.870786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.870920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.870947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.871081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.871107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.871238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.311 [2024-07-24 02:12:12.871265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.311 qpair failed and we were unable to recover it. 00:33:58.311 [2024-07-24 02:12:12.871415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.871459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 
00:33:58.312 [2024-07-24 02:12:12.871611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.871653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.871808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.871853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.871993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.872139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.872272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.872454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.872591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.872753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.872937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.872963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.873100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.873126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 
00:33:58.312 [2024-07-24 02:12:12.873264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.873290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.873435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.873462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.873602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.873628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.873795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.873822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.873953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.873980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.874111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.874142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.874273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.874299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.874438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.874465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.874599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.874625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.874797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.874823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 
00:33:58.312 [2024-07-24 02:12:12.874987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.875115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.875274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.875417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.875549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.875723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.875913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.875939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.876073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.876100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.312 [2024-07-24 02:12:12.876231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.312 [2024-07-24 02:12:12.876258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.312 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.876413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.876440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 
00:33:58.313 [2024-07-24 02:12:12.876600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.876626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.876758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.876784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.876892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.876918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.877020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.877046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.877204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.877229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.877364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.877391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.877522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.877548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.877670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.877714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.877864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.877891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.878050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.878077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 
00:33:58.313 [2024-07-24 02:12:12.878172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.878198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.878336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.878363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.878522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.878565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.878746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.878792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.878930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.878956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.879051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.879077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.879206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.879232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.879363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.879390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.879545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.879588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.879772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.879817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 
00:33:58.313 [2024-07-24 02:12:12.879979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.880005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.880183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.880219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.880378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.880410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.880559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.880589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.880732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.880761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.880872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.880906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.881080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.881108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.881259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.881288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.881406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.881433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.881585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.881614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 
00:33:58.313 [2024-07-24 02:12:12.881805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.881875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.882025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.882054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.882203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.882229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.882377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.882408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.882605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.313 [2024-07-24 02:12:12.882649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.313 qpair failed and we were unable to recover it. 00:33:58.313 [2024-07-24 02:12:12.882830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.882876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.883014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.883064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.883199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.883226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.883446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.883489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.883648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.883690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 
00:33:58.314 [2024-07-24 02:12:12.883849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.883891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.884025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.884051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.884201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.884230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.884368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.884395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.884502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.884529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.884690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.884719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.884846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.884873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.885065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.885094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.885268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.885294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.885514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.885541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 
00:33:58.314 [2024-07-24 02:12:12.885717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.885746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.885886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.885915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.886059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.886088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.886221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.886247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.886380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.886407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.886545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.886571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.886728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.886757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.886927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.886956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.887103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.887160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.887339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.887383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 
00:33:58.314 [2024-07-24 02:12:12.887537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.887563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.887717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.887746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.887887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.887915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.888021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.888050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.888224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.888254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.888409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.888440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.888573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.888615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.888835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.888885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.889029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.889058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.889174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.889204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 
00:33:58.314 [2024-07-24 02:12:12.889401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.889428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.314 [2024-07-24 02:12:12.889561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.314 [2024-07-24 02:12:12.889603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.314 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.889723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.889751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.890006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.890073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.890218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.890247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.890382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.890409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.890523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.890549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.890675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.890701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.890869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.890898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.891023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.891053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 
00:33:58.315 [2024-07-24 02:12:12.891223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.891252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.891390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.891419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.891582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.891608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.891707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.891753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.891984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.892045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.892214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.892242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.892385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.892411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.892571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.892595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.892776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.892839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.892989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 
00:33:58.315 [2024-07-24 02:12:12.893185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.893327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.893504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.893665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.893802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.893963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.893991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.894137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.894165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.894274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.894302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.894437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.894463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.894591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.894616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 
00:33:58.315 [2024-07-24 02:12:12.894777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.894806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.894952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.894980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.895113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.895155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.895305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.895341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.895492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.895518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.895665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.895694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.895874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.895902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.896057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.896082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.315 [2024-07-24 02:12:12.896243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.315 [2024-07-24 02:12:12.896268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.315 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.896399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.896426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 
00:33:58.316 [2024-07-24 02:12:12.896555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.896580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.896721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.896763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.896908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.896937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.897086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.897114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.897253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.897281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.897433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.897460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.897592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.897618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.897739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.897765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.897860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.897885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.898011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.898039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 
00:33:58.316 [2024-07-24 02:12:12.898192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.898221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.898390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.898419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.898534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.898562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.898702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.898730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.898875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.898903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.899071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.899099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.899235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.899263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.899429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.899456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.899629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.899658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.899794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.899820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 
00:33:58.316 [2024-07-24 02:12:12.899981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.900159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.900322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.900457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.900611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.900792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.900951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.900976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.901097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.901123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.901255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.901280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 00:33:58.316 [2024-07-24 02:12:12.901466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.316 [2024-07-24 02:12:12.901506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.316 qpair failed and we were unable to recover it. 
00:33:58.316 [2024-07-24 02:12:12.901691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.901722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.901879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.901905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.902032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.902072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.902209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.902237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.902395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.902422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.902518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.902544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.902728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.902754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.902917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.902943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.903096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.903127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.903269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.903297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 
00:33:58.317 [2024-07-24 02:12:12.903484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.903510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.903748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.903800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.903922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.903951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.904100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.904125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.904254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.904297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.904468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.904494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.904593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.904620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.904753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.904778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.904966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.904994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.905154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.905183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 
00:33:58.317 [2024-07-24 02:12:12.905350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.905395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.905540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.905567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.905708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.905734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.905867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.905893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.906022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.906050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.906230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.906256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.906393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.906420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.906560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.906587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.906719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.906745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.906874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.906916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 
00:33:58.317 [2024-07-24 02:12:12.907059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.907087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.907215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.907241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.907404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.907431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.907566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.907609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.907769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.907795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.907924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.907949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.317 [2024-07-24 02:12:12.908099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.317 [2024-07-24 02:12:12.908127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.317 qpair failed and we were unable to recover it. 00:33:58.318 [2024-07-24 02:12:12.908258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.318 [2024-07-24 02:12:12.908285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.318 qpair failed and we were unable to recover it. 00:33:58.318 [2024-07-24 02:12:12.908424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.318 [2024-07-24 02:12:12.908452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.318 qpair failed and we were unable to recover it. 00:33:58.318 [2024-07-24 02:12:12.908603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.318 [2024-07-24 02:12:12.908632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.318 qpair failed and we were unable to recover it. 
00:33:58.318 [2024-07-24 02:12:12.908779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.318 [2024-07-24 02:12:12.908805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420
00:33:58.318 qpair failed and we were unable to recover it.
00:33:58.318 [2024-07-24 02:12:12.909972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.318 [2024-07-24 02:12:12.909999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:58.318 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f1164000b90 or tqpair=0xcc6600 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously for every connection attempt from 02:12:12.908 through 02:12:12.946 (Jenkins elapsed time 00:33:58.318-00:33:58.323) ...]
00:33:58.323 [2024-07-24 02:12:12.946522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.323 [2024-07-24 02:12:12.946548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.323 qpair failed and we were unable to recover it. 00:33:58.323 [2024-07-24 02:12:12.946682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.323 [2024-07-24 02:12:12.946724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.323 qpair failed and we were unable to recover it. 00:33:58.323 [2024-07-24 02:12:12.946870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.946898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.947040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.947065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.947196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.947222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.947389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.947415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.947518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.947543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.947702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.947727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.947879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.947907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.948039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.948064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 
00:33:58.324 [2024-07-24 02:12:12.948196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.948221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.948348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.948394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.948527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.948552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.948688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.948714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.948840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.948868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.948986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.949153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.949307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.949469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.949606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 
00:33:58.324 [2024-07-24 02:12:12.949766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.949927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.949953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.950105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.950133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.950303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.950343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.950492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.950517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.950617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.950643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.950831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.950860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.951008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.951033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.951187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.951215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.951377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.951404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 
00:33:58.324 [2024-07-24 02:12:12.951506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.951532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.951668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.951693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.951821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.951849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.952028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.952186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.952329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.952496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.952632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.324 [2024-07-24 02:12:12.952797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.324 qpair failed and we were unable to recover it. 00:33:58.324 [2024-07-24 02:12:12.952951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.952978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 
00:33:58.325 [2024-07-24 02:12:12.953116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.953158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.953272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.953300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.953444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.953470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.953602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.953628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.953762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.953790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.953941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.953966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.954094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.954135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.954286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.954314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.954448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.954475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.954577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.954603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 
00:33:58.325 [2024-07-24 02:12:12.954752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.954780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.954959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.954985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.955139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.955167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.955304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.955347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.955509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.955535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.955673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.955699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.955883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.955911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.956066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.956091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.956249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.956292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.956471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.956497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 
00:33:58.325 [2024-07-24 02:12:12.956607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.956633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.956764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.956789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.956935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.956980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.957186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.957214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.957395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.957422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.957554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.957580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.957726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.957752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.957852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.957877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.958090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.958120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.958270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.958299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 
00:33:58.325 [2024-07-24 02:12:12.958457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.958483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.958590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.958633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.958749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.958778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.958945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.958974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.959107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.959135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.959250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.959278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.325 qpair failed and we were unable to recover it. 00:33:58.325 [2024-07-24 02:12:12.959471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.325 [2024-07-24 02:12:12.959498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.959626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.959656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.959807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.959835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.960008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.960036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 
00:33:58.326 [2024-07-24 02:12:12.960169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.960198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.960370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.960398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.960507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.960534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.960670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.960712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.960833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.960862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.961011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.961039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.961156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.961185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.961337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.961381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.961516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.961542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.961640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.961665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 
00:33:58.326 [2024-07-24 02:12:12.961796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.961824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.962031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.962059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.962207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.962235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.962375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.962401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.962634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.962673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.962837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.962883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.963050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.963101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.963275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.963302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.963422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.963448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.963559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.963586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 
00:33:58.326 [2024-07-24 02:12:12.963751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.963795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.963943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.963987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.964144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.964169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.964297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.964358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.964490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.964533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.964711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.964755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.965028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.965075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.965218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.326 [2024-07-24 02:12:12.965243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.326 qpair failed and we were unable to recover it. 00:33:58.326 [2024-07-24 02:12:12.965469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.965514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.965648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.965693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 
00:33:58.327 [2024-07-24 02:12:12.965851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.965893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.966060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.966102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.966255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.966282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.966421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.966467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.966600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.966652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.966791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.966834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.967134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.967170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.967329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.967358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.967472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.967499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.967653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.967678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 
00:33:58.327 [2024-07-24 02:12:12.967827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.967853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.968952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.968978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.969114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.969141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.969296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.969335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 
00:33:58.327 [2024-07-24 02:12:12.969439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.969465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.969604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.969635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.969786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.969925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.969951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.970083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.970120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.970255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.970281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.970398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.970424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.970537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.970562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.970738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.970764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.970895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.970922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 
00:33:58.327 [2024-07-24 02:12:12.971096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.971122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.971286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.971345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.971464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.971492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.971622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.971648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.327 [2024-07-24 02:12:12.971818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.327 [2024-07-24 02:12:12.971852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.327 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.972052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.972104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.972231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.972261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.972395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.972424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.972543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.972570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.972747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.972775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 
00:33:58.328 [2024-07-24 02:12:12.972960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.973000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.973138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.973167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.973311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.973369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.973478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.973504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.973670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.973696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.973855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.973881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.974009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.974041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.974172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.974203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.974387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.974415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.974525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.974551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 
00:33:58.328 [2024-07-24 02:12:12.974697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.974723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.974862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.974890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.975031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.975058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.975206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.975235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.975383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.975410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.975515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.975542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.975688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.975722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.975854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.975881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.976014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.976041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.976228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.976258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 
00:33:58.328 [2024-07-24 02:12:12.976501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.976541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.976699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.976738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.976882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.976910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.977061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.977102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.977269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.977303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.977539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.977567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.977758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.977786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.977967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.977996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.978163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.978191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.978353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.978380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 
00:33:58.328 [2024-07-24 02:12:12.978487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.328 [2024-07-24 02:12:12.978513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.328 qpair failed and we were unable to recover it. 00:33:58.328 [2024-07-24 02:12:12.978625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.978651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.978881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.978909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.979074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.979121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.979275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.979329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.979494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.979522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.979665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.979692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.979801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.979828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.979972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.979999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.980128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.980157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 
00:33:58.329 [2024-07-24 02:12:12.980328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.980375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.980491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.980517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.980639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.980669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.980859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.980899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.981039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.981176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.981358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.981491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.981648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.981805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 
00:33:58.329 [2024-07-24 02:12:12.981971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.981999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.982146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.982209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.982334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.982363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.982495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.982539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.982672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.982716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.982849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.982891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.983046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.983090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.983214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.983240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.983372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.983431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.983570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.983603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 
00:33:58.329 [2024-07-24 02:12:12.983767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.983797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.983957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.984005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.984180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.984209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.984399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.984427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.984552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.984582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.984742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.984768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.984882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.984909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.985050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.329 [2024-07-24 02:12:12.985077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.329 qpair failed and we were unable to recover it. 00:33:58.329 [2024-07-24 02:12:12.985246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.985273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.985400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.985430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 
00:33:58.330 [2024-07-24 02:12:12.985545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.985574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.985733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.985762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.985964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.986011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.986139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.986165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.986288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.986325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.986459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.986486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.986594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.986620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.986760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.986786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.986974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.987013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.987146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.987175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 
00:33:58.330 [2024-07-24 02:12:12.987409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.987449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.987564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.987591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.987733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.987758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.987936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.987966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.988178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.988207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.988327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.988370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.988512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.988538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.988683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.988719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.988874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.988903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.989037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.989075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 
00:33:58.330 [2024-07-24 02:12:12.989212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.989240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.989395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.989421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.989541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.989567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.989711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.989737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.989832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.989858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.990031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.990059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.990272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.990300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.990481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.990507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.990678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.990704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.990910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.990956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 
00:33:58.330 [2024-07-24 02:12:12.991106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.991135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.991249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.991277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.991435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.991462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.991574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.991600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.991722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.330 [2024-07-24 02:12:12.991748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.330 qpair failed and we were unable to recover it. 00:33:58.330 [2024-07-24 02:12:12.991906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.991939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.992092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.992120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.992242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.992272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.992416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.992443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.992549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.992575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 
00:33:58.331 [2024-07-24 02:12:12.992744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.992772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.992930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.992958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.993148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.993176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.993332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.993361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.993477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.993502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.993617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.993642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.993790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.993820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.993989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.994018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.994193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.994221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.994349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.994390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 
00:33:58.331 [2024-07-24 02:12:12.994548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.994573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.994755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.994783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.994981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.995029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.995200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.995228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.995388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.995415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.995548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.995573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.995707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.995732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.995867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.995892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.996051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.996079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.996208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.996252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 
00:33:58.331 [2024-07-24 02:12:12.996416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.996442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.996555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.996582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.996757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.996786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.996935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.996964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.997184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.997213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.997348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.997374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.997483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.331 [2024-07-24 02:12:12.997509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.331 qpair failed and we were unable to recover it. 00:33:58.331 [2024-07-24 02:12:12.997666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.997694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.997822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.997864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.998011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.998039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 
00:33:58.332 [2024-07-24 02:12:12.998195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.998223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.998335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.998362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.998496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.998521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.998638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.998666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.998839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.998867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.999013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.999041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.999183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.999211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.999380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.999407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.999517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.999542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:12.999675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.999704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 
00:33:58.332 [2024-07-24 02:12:12.999863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:12.999891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.000029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.000057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.000166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.000207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.000362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.000388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.000495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.000521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.000644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.000674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.000803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.000851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.001020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.001048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.001195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.001222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.001360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.001386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 
00:33:58.332 [2024-07-24 02:12:13.001530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.001556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.001693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.001720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.001910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.001938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.002074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.002117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.002267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.002295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.002439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.002464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.002579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.002631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.002745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.002772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.002940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.002967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.003129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.003153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 
00:33:58.332 [2024-07-24 02:12:13.003286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.003324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.003473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.003499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.003616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.003641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.003740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.332 [2024-07-24 02:12:13.003787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.332 qpair failed and we were unable to recover it. 00:33:58.332 [2024-07-24 02:12:13.003939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.003966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.004132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.004160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.004311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.004351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.004465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.004490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.004590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.004633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.004774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.004799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 
00:33:58.333 [2024-07-24 02:12:13.005847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.005881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.006057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.006087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.006867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.006899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.007051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.007081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.007209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.007238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.007405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.007431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.007549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.007574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.007772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.007800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.007931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.007987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.008157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.008185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 
00:33:58.333 [2024-07-24 02:12:13.008350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.008376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.008485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.008510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.008656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.008681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.008824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.008849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.008980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.009109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.009241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.009400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.009584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.009761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 
00:33:58.333 [2024-07-24 02:12:13.009942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.009972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.010109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.010134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.010288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.010336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.010467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.010496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.010639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.010667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.010833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.010861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.011025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.011050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.011180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.011215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.011382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.011411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.011578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.011605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 
00:33:58.333 [2024-07-24 02:12:13.011806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.333 [2024-07-24 02:12:13.011833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.333 qpair failed and we were unable to recover it. 00:33:58.333 [2024-07-24 02:12:13.012644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.012677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.012892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.012920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.013071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.013100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.013278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.013330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.013462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.013493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.013635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.013679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.013838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.013882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.014706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.014736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.014883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.014913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 
00:33:58.334 [2024-07-24 02:12:13.015604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.015642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.015847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.015894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.016017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.016043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.016148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.016173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.016304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.016348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.016457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.016484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.016696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.016722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.016950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.016980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.017146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.017171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.017313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.017347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 
00:33:58.334 [2024-07-24 02:12:13.017520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.017547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.017713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.017742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.017981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.018024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.018160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.018186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.018297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.018330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.018469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.018512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.018617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.018643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.018752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.018778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.019490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.019520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.019695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.019721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 
00:33:58.334 [2024-07-24 02:12:13.019889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.019915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.020088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.020113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.020219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.020245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.020398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.020442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.020650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.020697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.020911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.020954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.021165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.021190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.334 [2024-07-24 02:12:13.021333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.334 [2024-07-24 02:12:13.021361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.334 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.021546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.021594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.021748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.021790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 
00:33:58.335 [2024-07-24 02:12:13.022003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.022029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.022161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.022186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.022333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.022359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.022542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.022586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.022746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.022789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.022970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.023013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.023154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.023180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.023304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.023335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.023493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.023536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.023695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.023729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 
00:33:58.335 [2024-07-24 02:12:13.023848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.023875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.023982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.024008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.024170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.024196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.024329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.024356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.024472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.024501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.024660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.024709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.024876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.024901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.025009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.025039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.025140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.025165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.025293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.025324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 
00:33:58.335 [2024-07-24 02:12:13.025488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.025531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.025699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.025741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.025874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.025900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.026018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.026043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.026180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.026205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.026331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.026357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.026488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.026534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.026723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.026782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.026976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.027020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 00:33:58.335 [2024-07-24 02:12:13.027150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.335 [2024-07-24 02:12:13.027176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.335 qpair failed and we were unable to recover it. 
00:33:58.336 [2024-07-24 02:12:13.027325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.027351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.027505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.027549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.027701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.027743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.027880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.027906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.028053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.028079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.028207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.028232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.028393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.028441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.028564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.028592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.028735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.028761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.028902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.028930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 
00:33:58.336 [2024-07-24 02:12:13.029033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.029059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.029185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.029212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.029360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.029386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.029506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.029546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.029698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.029726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.029880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.029911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.030025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.030052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.030224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.030250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.030380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.030407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.030524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.030567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 
00:33:58.336 [2024-07-24 02:12:13.030761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.030790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.030980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.031008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.031157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.031186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.031334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.031379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.031534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.031563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.031797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.031842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.032028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.032056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.032207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.032241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.033066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.033100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.033288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.033323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 
00:33:58.336 [2024-07-24 02:12:13.033488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.033515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.033622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.033648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.033760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.033785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.033948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.033973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.034120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.034148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.336 [2024-07-24 02:12:13.034329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.336 [2024-07-24 02:12:13.034372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.336 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.034488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.034514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.034662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.034691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.034815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.034845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.034997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.035039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 
00:33:58.337 [2024-07-24 02:12:13.035219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.035256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.035418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.035445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.035606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.035632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.035806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.035862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.036047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.036075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.036232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.036260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.036401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.036427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.036564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.036589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.036739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.036765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.036951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.036979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 
00:33:58.337 [2024-07-24 02:12:13.037150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.037179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.037372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.037398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.037510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.037537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.037664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.037691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.037842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.037903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.038111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.038142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.038297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.038333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.038464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.038489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.038626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.038660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.038808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.038837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 
00:33:58.337 [2024-07-24 02:12:13.038995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.039022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.039190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.039216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.039372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.039399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.039553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.039582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.039845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.039896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.040043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.040072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.040226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.040254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.040453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.040487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.040615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.040656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.040831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.040859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 
00:33:58.337 [2024-07-24 02:12:13.040973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.041003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.337 [2024-07-24 02:12:13.041190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.337 [2024-07-24 02:12:13.041215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.337 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.041380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.041407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.041566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.041591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.041715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.041740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.041870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.041896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.042031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.042057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.042218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.042247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.042419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.042445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.042574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.042600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 
00:33:58.338 [2024-07-24 02:12:13.042732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.042758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.042925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.042951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.043103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.043132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.043282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.043310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.043497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.043524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.043659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.043684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.043785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.043829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.044005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.044033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.044178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.044206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.044373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.044400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 
00:33:58.338 [2024-07-24 02:12:13.044558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.044584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.044713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.044748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.044885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.044911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.045047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.045073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.045235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.045294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.045469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.045497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.045660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.045686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.045866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.045909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.046062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.046105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.046205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.046231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 
00:33:58.338 [2024-07-24 02:12:13.046401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.046428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.046534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.046560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.046704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.046730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.046858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.046890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.047000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.047027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.047174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.338 [2024-07-24 02:12:13.047199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.338 qpair failed and we were unable to recover it. 00:33:58.338 [2024-07-24 02:12:13.047370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.047396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.047526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.047551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.047706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.047732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.047886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.047911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 
00:33:58.339 [2024-07-24 02:12:13.048033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.048194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.048353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.048502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.048632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.048794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.048965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.048991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.049129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.049154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.049310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.049340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.049451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.049476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 
00:33:58.339 [2024-07-24 02:12:13.049579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.049605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.049725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.049750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.049857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.049882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.050042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.050201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.050355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.050528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.050726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.050850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.050988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.051013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 
00:33:58.339 [2024-07-24 02:12:13.051140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.051166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.051298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.051350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.051474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.051502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.051644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.051686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.051825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.051855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.052017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.052042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.052254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.052279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.052431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.052458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.052620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.052654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.052808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.052853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 
00:33:58.339 [2024-07-24 02:12:13.053037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.053063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.053197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.339 [2024-07-24 02:12:13.053223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.339 qpair failed and we were unable to recover it. 00:33:58.339 [2024-07-24 02:12:13.053392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.053436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.053608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.053638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.053831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.053885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.054020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.054045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.054152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.054178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.054282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.054308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.054529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.054555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.054686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.054712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 
00:33:58.340 [2024-07-24 02:12:13.054851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.054878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.055021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.055046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.055206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.055232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.055377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.055406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.055598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.055641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.055802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.055844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.055959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.055985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.056143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.056168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.056303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.056335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.056512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.056554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 
00:33:58.340 [2024-07-24 02:12:13.056721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.056765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.056900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.056948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.057119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.057144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.057279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.057305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.057454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.057498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.057681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.057729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.057852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.057880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.058034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.058059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.058206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.058232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.058368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.058394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 
00:33:58.340 [2024-07-24 02:12:13.058522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.058547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.058690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.058715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.340 qpair failed and we were unable to recover it. 00:33:58.340 [2024-07-24 02:12:13.058875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.340 [2024-07-24 02:12:13.058900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.059024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.059050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.059211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.059241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.059391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.059435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.059617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.059661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.059844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.059872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.060019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.060045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.060141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.060167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 
00:33:58.341 [2024-07-24 02:12:13.060290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.060361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.060512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.060556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.060737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.060780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.060961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.061003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.061140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.061166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.061300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.061338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.061446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.061471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.061603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.061640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.061807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.061833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.061957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.062000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 
00:33:58.341 [2024-07-24 02:12:13.062159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.062185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.062326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.062364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.062526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.062552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.062652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.062678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.062791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.062817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.062985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.063011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.063149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.063174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.063307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.063338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.063499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.063524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.063669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.063695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 
00:33:58.341 [2024-07-24 02:12:13.063832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.063858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.064024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.064049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.064203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.064228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.064387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.064432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.064562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.064591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.064802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.064856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.065012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.065042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.065220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.065249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.341 [2024-07-24 02:12:13.065447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.341 [2024-07-24 02:12:13.065477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.341 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.065621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.065651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 
00:33:58.342 [2024-07-24 02:12:13.065803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.065832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.065949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.065977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.066100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.066131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.066322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.066376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.066508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.066539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.066674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.066700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.066836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.066864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.067049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.067094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.067237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.067262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.067417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.067443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 
00:33:58.342 [2024-07-24 02:12:13.067567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.067595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.067735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.067763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.067940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.067983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.068116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.068151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.068331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.068358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.068514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.068557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.068711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.068753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.068910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.068953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.069110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.069135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.069265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.069291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 
00:33:58.342 [2024-07-24 02:12:13.069461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.069504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.069690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.069738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.069905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.069946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.070071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.070097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.070232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.070257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.070426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.070476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.070662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.070705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.070831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.070860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.071030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.071072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.071232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.071258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 
00:33:58.342 [2024-07-24 02:12:13.071451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.071495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.071678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.071720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.071881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.071909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.072062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.072088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.072184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.342 [2024-07-24 02:12:13.072210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.342 qpair failed and we were unable to recover it. 00:33:58.342 [2024-07-24 02:12:13.072334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.072361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.072511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.072556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.072712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.072740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.072882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.072928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.073094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.073128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 
00:33:58.343 [2024-07-24 02:12:13.073264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.073289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.073402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.073428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.073587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.073613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.073767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.073795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.074039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.074088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.074227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.074252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.074414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.074459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.074631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.074674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.074831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.074873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.074985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.075011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 
00:33:58.343 [2024-07-24 02:12:13.075169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.075195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.075333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.075360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.075515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.075558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.075703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.075746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.075867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.075909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.076064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.076090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.076222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.076247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.076379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.076408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.076570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.076598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.076799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.076842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 
00:33:58.343 [2024-07-24 02:12:13.076980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.077006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.077138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.077164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.077298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.077328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.077477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.077520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.077640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.077682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.077804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.077849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.078013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.078038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.078171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.078196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.078303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.078335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.078465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.078508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 
00:33:58.343 [2024-07-24 02:12:13.078663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.078705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.343 [2024-07-24 02:12:13.078865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.343 [2024-07-24 02:12:13.078907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.343 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.079037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.079062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.079219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.079244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.079397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.079440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.079548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.079574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.079722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.079764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.079921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.079963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.080100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.080124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.080284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.080310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 
00:33:58.344 [2024-07-24 02:12:13.080468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.080511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.080675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.080718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.080872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.080900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.081041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.081067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.081175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.081205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.081370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.081400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.081544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.081572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.081753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.081795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.081928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.081954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.082114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.082140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 
00:33:58.344 [2024-07-24 02:12:13.082245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.082271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.082435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.082461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.082566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.082592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.082726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.082769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.082922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.082948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.083107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.083132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.083234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.083260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.083396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.083422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.083578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.083621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.083782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.083808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 
00:33:58.344 [2024-07-24 02:12:13.083916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.083942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.084075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.084101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.084263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.084288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.084419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.084467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.084595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.084624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.084839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.084882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.085017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.085042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.085172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.085198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.085326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.085354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 00:33:58.344 [2024-07-24 02:12:13.085515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.344 [2024-07-24 02:12:13.085542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.344 qpair failed and we were unable to recover it. 
00:33:58.344 [2024-07-24 02:12:13.085690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.085716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.085885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.085911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.086072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.086098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.086264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.086290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.086401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.086428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.086559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.086602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.086744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.086787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.086942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.086986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.087145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.087171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.087323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.087366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 
00:33:58.345 [2024-07-24 02:12:13.087495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.087521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.087643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.087668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.087826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.087851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.088014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.088039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.088169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.088200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.088368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.088396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.088594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.088638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.088790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.088832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.088946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.088971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.089135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.089161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 
00:33:58.345 [2024-07-24 02:12:13.089297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.089328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.089439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.089465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.089599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.089624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.089756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.089781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.089915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.089941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.090042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.090068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.090164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.090190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.090326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.090352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.090472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.090497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.090626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.090652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 
00:33:58.345 [2024-07-24 02:12:13.090831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.090873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.091006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.091031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.091138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.091163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.091288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.345 [2024-07-24 02:12:13.091313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.345 qpair failed and we were unable to recover it. 00:33:58.345 [2024-07-24 02:12:13.091457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.091484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.091653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.091678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.091812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.091838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.091976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.092002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.092110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.092136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.092269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.092295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 
00:33:58.346 [2024-07-24 02:12:13.092438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.092463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.092594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.092638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.092828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.092856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.092981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.093006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.093141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.093166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.093395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.093438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.093616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.093658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.093873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.093914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.094104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.094147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.094282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.094308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 
00:33:58.346 [2024-07-24 02:12:13.094466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.094509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.094685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.094728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.094852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.094880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.095075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.095103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.095249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.095279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.095435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.095479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.095646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.095673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.095800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.095826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.095949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.095978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 00:33:58.346 [2024-07-24 02:12:13.096105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.346 [2024-07-24 02:12:13.096131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.346 qpair failed and we were unable to recover it. 
00:33:58.346 [2024-07-24 02:12:13.096260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.346 [2024-07-24 02:12:13.096286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.346 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 02:12:13.096 through 02:12:13.133 ...]
00:33:58.351 [2024-07-24 02:12:13.133253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.351 [2024-07-24 02:12:13.133278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.351 qpair failed and we were unable to recover it.
00:33:58.352 [2024-07-24 02:12:13.133400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.133427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.133618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.133660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.133792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.133818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.133928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.133953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.134084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.134110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.134243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.134268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.134375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.134402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.134585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.134627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.134773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.134815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.134946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.134972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 
00:33:58.352 [2024-07-24 02:12:13.135134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.135160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.135266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.135291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.135401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.135431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.135568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.135595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.135750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.135776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.135927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.135955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.136071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.136096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.136225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.136250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.136351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.136377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.136526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.136551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 
00:33:58.352 [2024-07-24 02:12:13.136702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.136745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.136921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.136969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.137129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.137154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.137307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.137338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.137520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.137563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.137717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.137760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.137938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.137981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.138111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.138136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.138267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.138293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.138450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.138493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 
00:33:58.352 [2024-07-24 02:12:13.138634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.138677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.138856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.138904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.139052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.139095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.139254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.139279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.139441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.139485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.139632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.139675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.139822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.139863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.140014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.140167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.140326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 
00:33:58.352 [2024-07-24 02:12:13.140477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.140613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.140768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.352 [2024-07-24 02:12:13.140931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.352 [2024-07-24 02:12:13.140956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.352 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.141112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.141138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.141298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.141339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.141507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.141533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.141691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.141717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.141844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.141869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.141997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.142022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 
00:33:58.353 [2024-07-24 02:12:13.142156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.142182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.142310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.142346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.142478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.142521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.142680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.142722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.142872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.142916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.143051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.143078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.143214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.143240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.143388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.143431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.143616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.143659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.143807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.143850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 
00:33:58.353 [2024-07-24 02:12:13.143962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.143987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.144087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.144112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.144245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.144270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.144409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.144434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.144544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.144569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.144732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.144758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.144895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.144921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.145052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.145078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.145219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.145244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.145397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.145426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 
00:33:58.353 [2024-07-24 02:12:13.145591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.145634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.145753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.145781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.145953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.145979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.146111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.146136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.146240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.146265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.146418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.146447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.146625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.146668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.146842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.146885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.147049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.147074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.147199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.147225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 
00:33:58.353 [2024-07-24 02:12:13.147387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.147416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.147567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.147613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.147796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.147838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.147973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.147998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.353 qpair failed and we were unable to recover it. 00:33:58.353 [2024-07-24 02:12:13.148123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.353 [2024-07-24 02:12:13.148149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.148247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.148272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.148414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.148439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.148544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.148569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.148732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.148757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.148890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.148915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 
00:33:58.354 [2024-07-24 02:12:13.149029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.149054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.149221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.149250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.149356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.149382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.149515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.149558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.149742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.149771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.149945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.149987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.150154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.150179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.150323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.150349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.150528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.150574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.150717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.150761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 
00:33:58.354 [2024-07-24 02:12:13.150914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.150956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.151085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.151110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.151269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.151294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.151456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.151499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.151625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.151666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.151816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.151859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.151998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.152180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.152365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.152505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 
00:33:58.354 [2024-07-24 02:12:13.152637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.152770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.152909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.152934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.153062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.354 [2024-07-24 02:12:13.153087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.354 qpair failed and we were unable to recover it. 00:33:58.354 [2024-07-24 02:12:13.153221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.153248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.153410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.153436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.153549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.153574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.153732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.153757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.153896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.153923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.154062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.154088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 
00:33:58.355 [2024-07-24 02:12:13.154247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.154272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.154402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.154446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.154573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.154619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.154800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.154842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.154945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.154971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.155108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.155133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.155271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.155296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.155420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.155449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.155626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.155668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 00:33:58.355 [2024-07-24 02:12:13.155784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.355 [2024-07-24 02:12:13.155827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.355 qpair failed and we were unable to recover it. 
00:33:58.355 [2024-07-24 02:12:13.155961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.355 [2024-07-24 02:12:13.155986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.355 qpair failed and we were unable to recover it.
00:33:58.355 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt from 02:12:13.156122 through 02:12:13.192054 ...]
00:33:58.643 [2024-07-24 02:12:13.192054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.643 [2024-07-24 02:12:13.192080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.643 qpair failed and we were unable to recover it.
00:33:58.643 [2024-07-24 02:12:13.192236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.192261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.192441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.192485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.192601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.192630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.192799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.192842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.192954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.192979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.193116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.193142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.193278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.193304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.193495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.193538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.193682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.193729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.193886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.193911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 
00:33:58.643 [2024-07-24 02:12:13.194042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.194068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.194201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.194226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.194352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.194378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.194538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.194581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.194731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.194774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.194922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.194970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.195078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.195103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.195262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.195287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.195491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.195537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.195701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.195743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 
00:33:58.643 [2024-07-24 02:12:13.195893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.195936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.196097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.196122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.196282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.196307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.196494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.196522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.196722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.196765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.196921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.196963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.197128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.197153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.197276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.197301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.197488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.197532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.197679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.197722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 
00:33:58.643 [2024-07-24 02:12:13.197873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.197915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.198043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.198090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.198249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.198275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.198413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.198455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.198587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.198612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.198778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.198804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.198954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.198999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.199132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.199157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.199260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.199287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.199477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.199519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 
00:33:58.643 [2024-07-24 02:12:13.199677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.199721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.199890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.199916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.200046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.200071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.200207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.200233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.200408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.200451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.200610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.200653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.200784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.200809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.200943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.200969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.201103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.201128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.201301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.201345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 
00:33:58.643 [2024-07-24 02:12:13.201508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.201533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.201662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.201687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.201868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.201896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.202071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.202096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.202227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.202252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.202401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.202445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.202596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.202639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.202770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.202812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.202996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.203039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.203164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.203189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 
00:33:58.643 [2024-07-24 02:12:13.203325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.643 [2024-07-24 02:12:13.203352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.643 qpair failed and we were unable to recover it. 00:33:58.643 [2024-07-24 02:12:13.203510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.203552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.203667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.203695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.203840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.203865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.204006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.204168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.204292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.204445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.204598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.204732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 
00:33:58.644 [2024-07-24 02:12:13.204917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.204942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.205070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.205100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.205202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.205227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.205334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.205361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.205519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.205561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.205727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.205754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.205914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.205939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.206068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.206093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.206221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.206246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.206414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.206458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 
00:33:58.644 [2024-07-24 02:12:13.206577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.206619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.206771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.206814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.206957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.206982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.207108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.207133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.207267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.207294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.207485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.207527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.207686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.207728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.207891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.207917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.208026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.208051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.208152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.208177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 
00:33:58.644 [2024-07-24 02:12:13.208310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.208352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.208489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.208514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.208670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.208695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.208847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.208892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.209051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.209076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.209239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.209264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.209410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.209452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.209606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.209634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.209781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.209824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.209930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.209955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 
00:33:58.644 [2024-07-24 02:12:13.210089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.210114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.210219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.210245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.210413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.210456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.210618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.210643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.210765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.210795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.210968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.210993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.211126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.211151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.211251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.211277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.211508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.211551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.211734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.211763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 
00:33:58.644 [2024-07-24 02:12:13.211905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.211933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.212071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.212098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.212222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.212250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.212400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.212426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.212579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.212606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.212738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.212778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.212950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.212977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.213085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.213113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.213285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.213312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.213468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.213493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 
00:33:58.644 [2024-07-24 02:12:13.213642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.213669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.213818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.644 [2024-07-24 02:12:13.213846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.644 qpair failed and we were unable to recover it. 00:33:58.644 [2024-07-24 02:12:13.214013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.214041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.214250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.214281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.214437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.214463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.214573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.214599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.214745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.214789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.214950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.214992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.215148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.215173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.215322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.215362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.215515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.215558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.215703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.215745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.215886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.215928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.216061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.216087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.216244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.216270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.216434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.216473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.216609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.216639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.216780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.216808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.216989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.217022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.217193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.217220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.217336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.217378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.217533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.217557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.217712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.217740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.217886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.217915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.218022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.218049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.218200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.218227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.218342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.218386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.218515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.218540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.218640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.218665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.218823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.218850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.218969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.219157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.219377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.219508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.219664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.219810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.219956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.219996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.220134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.220161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.220267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.220294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.220418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.220444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.220579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.220622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.220788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.220816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.220973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.221000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.221106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.221133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.221270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.221295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.221430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.221455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.221566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.221591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.221754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.221781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.221986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.222119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.222291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.222442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.222640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.222780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.222935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.222976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.223113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.223140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.223282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.223310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.223441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.223466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.223622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.223647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.223818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.223849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.224021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.224048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.224162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.224189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.224301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.224342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.224495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.224520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.224671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.224698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.224814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.224841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.225043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.225071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.225213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.225240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.225398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.225423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.225532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.225558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 
00:33:58.645 [2024-07-24 02:12:13.225659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.225683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.225793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.645 [2024-07-24 02:12:13.225821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.645 qpair failed and we were unable to recover it. 00:33:58.645 [2024-07-24 02:12:13.225985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.226012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.226187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.226214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.226365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.226391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.226549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.226573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.226754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.226781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.226972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.226999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.227230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.227257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.227396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.227422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 
00:33:58.646 [2024-07-24 02:12:13.227524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.227548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.227703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.227727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.227878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.227905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.228083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.228110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.228281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.228305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.228442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.228466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.228639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.228666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.228781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.228808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.228948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.228976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.229117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.229144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 
00:33:58.646 [2024-07-24 02:12:13.229326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.229351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.229462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.229487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.229601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.229625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.229722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.229747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.229851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.229876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.229994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.230022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.230191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.230218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.230369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.230394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.230498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.230523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.230624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.230665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 
00:33:58.646 [2024-07-24 02:12:13.230838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.230865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.231001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.231028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.231168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.231196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.231385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.231410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.231533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.231558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.231685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.231712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.231848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.231875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.232013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.232040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.232231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.232286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.232444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.232479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 
00:33:58.646 [2024-07-24 02:12:13.232598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.232624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.232756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.232786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.232967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.232995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.233149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.233174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.233341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.233368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.233490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.233518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.233728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.233756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.233984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.234041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.234225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.234249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.234395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.234423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 
00:33:58.646 [2024-07-24 02:12:13.234696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.234748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.234895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.234938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.235071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.235096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.235201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.235225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.235356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.235381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.235563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.235591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.235771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.235798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.235926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.235955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.236112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.236137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.236294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.236324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 
00:33:58.646 [2024-07-24 02:12:13.236499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.236527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.236705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.236729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.236906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.236967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.237115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.237140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.237240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.237264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.237384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.237411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.237699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.646 [2024-07-24 02:12:13.237750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.646 qpair failed and we were unable to recover it. 00:33:58.646 [2024-07-24 02:12:13.237928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.237953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.238086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.238111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.238264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.238289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 
00:33:58.647 [2024-07-24 02:12:13.238443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.238470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.238645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.238672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.238840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.238867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.238992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.239017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.239148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.239173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.239298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.239329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.239454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.239481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.239599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.239626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.239824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.239852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.239976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 
00:33:58.647 [2024-07-24 02:12:13.240127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.240284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.240415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.240538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.240657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.240818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.240843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.241009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.241147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.241276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.241472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 
00:33:58.647 [2024-07-24 02:12:13.241659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.241795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.241923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.241947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.242109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.242134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.242261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.242285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.242400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.242427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.242587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.242612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.242770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.242794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.242926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.242950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.243107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.243131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 
00:33:58.647 [2024-07-24 02:12:13.243286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.243313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.243494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.243518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.243652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.243677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.243833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.243857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.243995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.244021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.244183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.244210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.244339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.244365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.244492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.244516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.244677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.244702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 00:33:58.647 [2024-07-24 02:12:13.244792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.647 [2024-07-24 02:12:13.244817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.647 qpair failed and we were unable to recover it. 
00:33:58.647 [2024-07-24 02:12:13.244970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.647 [2024-07-24 02:12:13.244995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:58.647 qpair failed and we were unable to recover it.
00:33:58.647 [2024-07-24 02:12:13.245125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.647 [2024-07-24 02:12:13.245150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:58.647 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every connection attempt logged between 02:12:13.245 and 02:12:13.280 (elapsed 00:33:58.647 through 00:33:58.650), alternating between tqpair=0xcc6600 and tqpair=0x7f115c000b90, always with addr=10.0.0.2, port=4420 and errno = 111 ...]
00:33:58.650 [2024-07-24 02:12:13.280315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.650 [2024-07-24 02:12:13.280345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.650 qpair failed and we were unable to recover it.
00:33:58.651 [2024-07-24 02:12:13.280465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.280493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.280663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.280705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.280864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.280889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.280989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.281145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.281308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.281466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.281597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.281759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.281885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.281911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 
00:33:58.651 [2024-07-24 02:12:13.282075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.282100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.282228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.282253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.282415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.282457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.282637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.282680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.282864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.282905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.283014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.283040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.283173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.283198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.283377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.283432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.283565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.283592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.283736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.283783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 
00:33:58.651 [2024-07-24 02:12:13.283945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.283971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.284107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.284132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.284292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.284328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.284465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.284491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.284626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.284651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.284774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.284817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.284948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.284974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.285111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.285136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.285263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.285288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.285439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.285483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 
00:33:58.651 [2024-07-24 02:12:13.285681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.285709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.285877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.285920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.286076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.286101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.286219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.286245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.286376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.286420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.651 [2024-07-24 02:12:13.286540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.651 [2024-07-24 02:12:13.286586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.651 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.286736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.286764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.286903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.286948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.287077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.287102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.287227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.287252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 
00:33:58.652 [2024-07-24 02:12:13.287363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.287390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.287518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.287546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.287720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.287763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.287941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.287984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.288141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.288166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.288290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.288338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.288526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.288574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.288728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.288771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.288946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.288972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.289102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.289128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 
00:33:58.652 [2024-07-24 02:12:13.289256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.289282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.289385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.289411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.289545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.289570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.289708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.289733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.289870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.289895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.290026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.290051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.290196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.290221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.290377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.290420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.290563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.290606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.290794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.290836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 
00:33:58.652 [2024-07-24 02:12:13.291004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.291029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.291185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.291211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.291313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.291344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.291525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.291553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.291715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.291758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.291919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.291962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.292101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.292126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.292286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.292311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.292509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.292552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 00:33:58.652 [2024-07-24 02:12:13.292710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.652 [2024-07-24 02:12:13.292752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.652 qpair failed and we were unable to recover it. 
00:33:58.652 [2024-07-24 02:12:13.292936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.292979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.293110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.293135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.293292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.293322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.293484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.293526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.293674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.293717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.293872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.293914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.294091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.294135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.294270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.294296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.294457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.294501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.294648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.294675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 
00:33:58.653 [2024-07-24 02:12:13.294868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.294916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.295055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.295080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.295235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.295259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.295379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.295409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.295581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.295610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.295814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.295842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.296034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.296081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.296243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.296268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.296449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.296478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.296644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.296686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 
00:33:58.653 [2024-07-24 02:12:13.296870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.296912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.297067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.297093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.297217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.297242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.297419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.297462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.297616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.297657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.297816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.297860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.297978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.298003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.298134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.298160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.298264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.298289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.298418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.298448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 
00:33:58.653 [2024-07-24 02:12:13.298594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.298637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.298815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.298843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.653 [2024-07-24 02:12:13.299000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.653 [2024-07-24 02:12:13.299025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.653 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.299134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.299161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.299326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.299352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.299544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.299592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.299745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.299786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.299936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.299979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.300107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.300132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.300265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.300290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 
00:33:58.654 [2024-07-24 02:12:13.300466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.300510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.300651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.300676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.300815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.300840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.300970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.301012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.301115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.301141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.301275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.301300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.301525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.301568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.301746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.301789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.301920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.301963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.302089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.302114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 
00:33:58.654 [2024-07-24 02:12:13.302215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.302240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.302375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.302401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.302534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.302559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.302699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.302724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.302894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.302919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.303024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.303049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.303186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.303216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.303326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.303352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.303502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.303546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 00:33:58.654 [2024-07-24 02:12:13.303731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.654 [2024-07-24 02:12:13.303758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.654 qpair failed and we were unable to recover it. 
00:33:58.654 [2024-07-24 02:12:13.303954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.654 [2024-07-24 02:12:13.303997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.654 qpair failed and we were unable to recover it.
...
00:33:58.660 [2024-07-24 02:12:13.341078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.660 [2024-07-24 02:12:13.341104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:58.660 qpair failed and we were unable to recover it.
00:33:58.660 [2024-07-24 02:12:13.341253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.341279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.341394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.341437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.341597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.341623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.341776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.341805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.341931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.341958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.342098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.342124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.342258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.342284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.342434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.342463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.342625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.342668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.660 [2024-07-24 02:12:13.342819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.342861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 
00:33:58.660 [2024-07-24 02:12:13.342994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.660 [2024-07-24 02:12:13.343019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.660 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.343153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.343178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.343320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.343346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.343464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.343493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.343626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.343651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.343807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.343832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.343966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.343992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.344089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.344114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.344247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.344272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.344387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.344412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 
00:33:58.661 [2024-07-24 02:12:13.344545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.344571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.344727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.344752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.344880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.344904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.345040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.345067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.345230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.345256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.345415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.345458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.345577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.345619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.345775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.345817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.345955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.345980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.346118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.346144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 
00:33:58.661 [2024-07-24 02:12:13.346278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.346303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.346455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.346498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.346680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.346723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.346851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.346876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.346999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.347024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.347159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.347186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.347325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.347350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.347503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.347545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.347699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.347741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.347855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.347897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 
00:33:58.661 [2024-07-24 02:12:13.348023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.348049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.348183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.348209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.348349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.348376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.348512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.348538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.348686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.348711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.348837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.348863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.349024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.661 [2024-07-24 02:12:13.349049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.661 qpair failed and we were unable to recover it. 00:33:58.661 [2024-07-24 02:12:13.349180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.349206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.349361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.349390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.349527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.349571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 
00:33:58.662 [2024-07-24 02:12:13.349734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.349759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.349884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.349910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.350020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.350045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.350208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.350238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.350415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.350459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.350610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.350652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.350786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.350832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.350959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.350985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.351114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.351140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.351305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.351334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 
00:33:58.662 [2024-07-24 02:12:13.351493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.351518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.351624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.351650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.351817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.351842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.352020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.352063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.352204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.352229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.352412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.352459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.352590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.352632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.352752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.352795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.352933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.352958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.353113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.353138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 
00:33:58.662 [2024-07-24 02:12:13.353265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.353290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.353430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.353456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.353591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.353616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.353750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.353775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.353874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.353900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.354035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.354062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.354200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.354226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.354372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.662 [2024-07-24 02:12:13.354398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.662 qpair failed and we were unable to recover it. 00:33:58.662 [2024-07-24 02:12:13.354509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.354535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.354718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.354761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 
00:33:58.663 [2024-07-24 02:12:13.354927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.354970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.355128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.355153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.355287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.355314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.355476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.355518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.355661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.355708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.355831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.355859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.355985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.356146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.356276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.356440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 
00:33:58.663 [2024-07-24 02:12:13.356564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.356742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.356921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.356947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.357109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.357139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.357248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.357273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.357468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.357498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.357629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.357672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.357821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.357863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.358021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.358192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 
00:33:58.663 [2024-07-24 02:12:13.358358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.358524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.358651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.358810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.358969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.358994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.359122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.359148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.359259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.359285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.359436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.359463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.359590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.359615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.359746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.359771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 
00:33:58.663 [2024-07-24 02:12:13.359882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.359907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.360037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.360062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.360195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.360221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.360363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.663 [2024-07-24 02:12:13.360389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.663 qpair failed and we were unable to recover it. 00:33:58.663 [2024-07-24 02:12:13.360493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.360520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.360621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.360646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.360806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.360831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.360977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.361162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.361299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 
00:33:58.664 [2024-07-24 02:12:13.361467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.361634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.361807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.361960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.361985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.362129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.362154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.362294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.362327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.362485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.362511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.362614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.362640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.362777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.362802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.362962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.362987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 
00:33:58.664 [2024-07-24 02:12:13.363121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.363147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.363282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.363309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.363474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.363517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.363673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.363726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.363898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.363924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.364055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.364080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.364219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.364245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.364375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.364405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.364601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.364649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.364800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.364842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 
00:33:58.664 [2024-07-24 02:12:13.365002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.365027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.365189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.365214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.365337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.365363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.365491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.365534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.365661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.365703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.365883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.365924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.366088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.366114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.366218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.366243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.366340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.366366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 00:33:58.664 [2024-07-24 02:12:13.366528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.664 [2024-07-24 02:12:13.366554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.664 qpair failed and we were unable to recover it. 
00:33:58.664 [2024-07-24 02:12:13.366685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.366729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.366885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.366929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.367089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.367115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.367277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.367302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.367431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.367477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.367663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.367705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.367852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.367894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.368031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.368057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.368215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.368240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.368392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.368435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 
00:33:58.665 [2024-07-24 02:12:13.368590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.368620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.368785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.368813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.368963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.368988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.369093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.369120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.369283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.369308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.369492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.369520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.369655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.369683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.369845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.369887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.370046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.370071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.370232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.370257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 
00:33:58.665 [2024-07-24 02:12:13.370410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.370438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.370609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.370637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.370871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.370913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.371075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.371105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.371237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.371263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.371391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.371435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.371626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.371669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.371820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.371863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.371997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.372022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.372152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.372178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 
00:33:58.665 [2024-07-24 02:12:13.372312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.372344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.372503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.372546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.372683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.372725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.372823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.372850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.372983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.373009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.373144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.373170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.665 qpair failed and we were unable to recover it. 00:33:58.665 [2024-07-24 02:12:13.373332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.665 [2024-07-24 02:12:13.373359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.373517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.373560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.373689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.373732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.373836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.373863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 
00:33:58.666 [2024-07-24 02:12:13.374027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.374053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.374188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.374214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.374373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.374402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.374633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.374674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.374834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.374878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.375004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.375028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.375188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.375213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.375382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.375408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.375575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.375600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.375752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.375798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 
00:33:58.666 [2024-07-24 02:12:13.375937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.375963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.376069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.376094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.376240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.376266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.376427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.376470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.376589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.376631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.376759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.376801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.376939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.376965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.377101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.377126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.377237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.377263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.377402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.377428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 
00:33:58.666 [2024-07-24 02:12:13.377561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.377586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.377722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.377748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.377854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.377879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.377981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.378122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.378284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.378450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.378570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.378726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.378911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.378936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 
00:33:58.666 [2024-07-24 02:12:13.379071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.379096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.379256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.379282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.666 qpair failed and we were unable to recover it. 00:33:58.666 [2024-07-24 02:12:13.379424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.666 [2024-07-24 02:12:13.379450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.379605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.379630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.379783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.379811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.380005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.380050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.380174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.380199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.380342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.380368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.380482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.380510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.380675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.380719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 
00:33:58.667 [2024-07-24 02:12:13.380901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.380946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.381106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.381131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.381241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.381267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.381427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.381470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.381653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.381696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.381821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.381850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.382001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.382026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.382129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.382154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.382283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.382309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.382469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.382511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 
00:33:58.667 [2024-07-24 02:12:13.382663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.382701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.382836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.382863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.383904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.383932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.384070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.384098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 
00:33:58.667 [2024-07-24 02:12:13.384246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.384273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.384414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.667 [2024-07-24 02:12:13.384441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.667 qpair failed and we were unable to recover it. 00:33:58.667 [2024-07-24 02:12:13.384600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.384645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.384798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.384841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.384993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.385036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.385141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.385168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.385322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.385364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.385513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.385541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.385685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.385729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.385840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.385882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 
00:33:58.668 [2024-07-24 02:12:13.386045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.386070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.386174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.386201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.386328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.386371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.386516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.386544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.386656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.386683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.386798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.386827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.386973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.387000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.387166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.387210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.387349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.387383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.387535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.387579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 
00:33:58.668 [2024-07-24 02:12:13.387733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.387777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.387929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.387973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.388086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.388113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.388223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.388250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.388380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.388406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.388507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.388549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.388719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.388746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.388865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.388892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.389009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.389037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 00:33:58.668 [2024-07-24 02:12:13.389189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.389216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.668 qpair failed and we were unable to recover it. 
00:33:58.668 [2024-07-24 02:12:13.389371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.668 [2024-07-24 02:12:13.389400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.389576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.389619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.389778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.389806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.389977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.390019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.390158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.390184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.390345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.390371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.390481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.390506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.390668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.390696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.390814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.390856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.390996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.391024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 
00:33:58.669 [2024-07-24 02:12:13.391161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.391189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.391344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.391370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.391498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.391523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.391676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.391703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.391907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.391935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.392079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.392107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.392252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.392280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.392430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.392456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.392604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.392632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.392804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.392831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 
00:33:58.669 [2024-07-24 02:12:13.392956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.392998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.393104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.393131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.393274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.393302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.393441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.393466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.393595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.393619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.393768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.393796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.393999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.394027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.394136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.394163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.394313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.394364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 00:33:58.669 [2024-07-24 02:12:13.394472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.669 [2024-07-24 02:12:13.394497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.669 qpair failed and we were unable to recover it. 
00:33:58.670 [2024-07-24 02:12:13.394622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.394646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.394809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.394836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.394954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.394995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.395100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.395128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.395273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.395300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.395484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.395509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.395672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.395700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.395889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.395914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.396129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.396157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.396328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.396372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 
00:33:58.670 [2024-07-24 02:12:13.396515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.396539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.396690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.396719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.396822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.396863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.397015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.397043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.397189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.397216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.397386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.397412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.397542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.397567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.397700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.397725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.397847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.397872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.398028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.398056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 
00:33:58.670 [2024-07-24 02:12:13.398251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.398278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.398459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.398485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.398636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.398664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.398784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.398808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.398912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.398936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.399082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.399109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.399237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.399262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.399389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.670 [2024-07-24 02:12:13.399415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.670 qpair failed and we were unable to recover it. 00:33:58.670 [2024-07-24 02:12:13.399531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.399558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.399736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.399761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 
00:33:58.671 [2024-07-24 02:12:13.399895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.399920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.400052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.400076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.400207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.400233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.400368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.400410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.400544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.400572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.400725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.400749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.400879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.400904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.401058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.401085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.401208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.401233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.401364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.401389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 
00:33:58.671 [2024-07-24 02:12:13.401565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.401593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.401769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.401794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.401928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.401971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.402154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.402181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.402304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.402334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.402432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.402457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.402608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.402635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.402790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.402814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.402915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.402940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.403086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.403114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 
00:33:58.671 [2024-07-24 02:12:13.403261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.403286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.403421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.403446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.403655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.403697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.403840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.671 [2024-07-24 02:12:13.403868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.671 qpair failed and we were unable to recover it. 00:33:58.671 [2024-07-24 02:12:13.404006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.404032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.404164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.404189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.404367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.404393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.404529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.404554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.404709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.404734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.404843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.404870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 
00:33:58.672 [2024-07-24 02:12:13.404978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.405004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.405137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.405163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.405289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.405314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.405490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.405515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.405671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.405696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.405822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.405846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.405978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.406003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.406176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.406204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.406389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.406414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.406526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.406551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 
00:33:58.672 [2024-07-24 02:12:13.406712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.406736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.406840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.406865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.406996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.407021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.407159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.407187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.407351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.407377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.407480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.407506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.407637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.407662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.407780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.672 [2024-07-24 02:12:13.407804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.672 qpair failed and we were unable to recover it. 00:33:58.672 [2024-07-24 02:12:13.407916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.407941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.408083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.408110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 
00:33:58.673 [2024-07-24 02:12:13.408264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.408291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.408445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.408470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.408601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.408626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.408784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.408808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.408963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.408988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.409125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.409153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.409282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.409311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.409486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.409511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.409669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.409693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.409827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.409852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 
00:33:58.673 [2024-07-24 02:12:13.409964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.409990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.410161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.410188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.410306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.410340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.410494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.410519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.410653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.410678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.410837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.410862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.673 qpair failed and we were unable to recover it. 00:33:58.673 [2024-07-24 02:12:13.410990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.673 [2024-07-24 02:12:13.411016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.411152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.411179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.411345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.411371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.411503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.411528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 
00:33:58.674 [2024-07-24 02:12:13.411654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.411679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.411789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.411814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.411943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.411968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.412069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.412094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.412239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.412267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.412420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.412445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.412581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.412606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.412767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.412793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.412920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.412945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.413056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.413083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 
00:33:58.674 [2024-07-24 02:12:13.413233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.413258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.413381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.413406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.413541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.413568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.413709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.413736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.413897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.413922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.414056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.414083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.414218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.674 [2024-07-24 02:12:13.414246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.674 qpair failed and we were unable to recover it. 00:33:58.674 [2024-07-24 02:12:13.414393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.414419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.414550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.414575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.414705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.414734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 
00:33:58.675 [2024-07-24 02:12:13.414862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.414887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.415952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.415978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.416116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.416141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.416295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.416329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 
00:33:58.675 [2024-07-24 02:12:13.416445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.416470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.416602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.416627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.416760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.416785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.416901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.416926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.417026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.417050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.417202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.417228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.417361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.417400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.417539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.417566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.417713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.675 [2024-07-24 02:12:13.417738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.675 qpair failed and we were unable to recover it. 00:33:58.675 [2024-07-24 02:12:13.417877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.417903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 
00:33:58.676 [2024-07-24 02:12:13.418076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.418102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.418227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.418255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.418418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.418445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.418560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.418586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.418694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.418719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.418876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.418901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.419060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.419084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.419244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.419269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.419367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.419393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.419553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.419578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 
00:33:58.676 [2024-07-24 02:12:13.419699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.419724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.419855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.419881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.419993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.420194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.420366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.420501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.420690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.420841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.420972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.420997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.421126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 
00:33:58.676 [2024-07-24 02:12:13.421246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.421416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.421549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.421680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.421814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.421928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.421953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.422083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.422109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.422287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.422320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.676 [2024-07-24 02:12:13.422497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.676 [2024-07-24 02:12:13.422522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.676 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.422644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.422669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 
00:33:58.677 [2024-07-24 02:12:13.422768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.422792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.422953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.422977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.423095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.423134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.423311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.423347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.423507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.423533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.423693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.423719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.423828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.423853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.423954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.423979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.424131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.424156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.424314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.424345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 
00:33:58.677 [2024-07-24 02:12:13.424477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.424503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.424640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.424664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.424798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.424823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.424955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.424981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.425139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.425166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.425287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.425314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.425513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.425538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.425692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.425720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.425877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.425902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.426001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.426026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 
00:33:58.677 [2024-07-24 02:12:13.426153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.426178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.426369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.426394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.426555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.426580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.426705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.426730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.426860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.426885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.677 [2024-07-24 02:12:13.427027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.677 [2024-07-24 02:12:13.427052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.677 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.427222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.427260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.427397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.427425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.427538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.427563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.427718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.427743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 
00:33:58.678 [2024-07-24 02:12:13.427860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.427889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.428052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.428204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.428406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.428563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.428724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.428850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.428983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.429108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.429269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 
00:33:58.678 [2024-07-24 02:12:13.429401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.429563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.429719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.429880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.429904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.430035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.430218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.430367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.430517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.430673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.430806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 
00:33:58.678 [2024-07-24 02:12:13.430954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.430979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.431079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.431104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.431209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.431234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.431376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.431415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.678 [2024-07-24 02:12:13.431534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.678 [2024-07-24 02:12:13.431561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.678 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.431663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.431688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.431849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.431874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.432011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.432195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.432370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 
00:33:58.679 [2024-07-24 02:12:13.432509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.432635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.432790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.432920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.432945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.433073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.433201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.433354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.433491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.433615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.433773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 
00:33:58.679 [2024-07-24 02:12:13.433902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.433927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.434931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.434957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.435094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.435119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.435240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.435268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 
00:33:58.679 [2024-07-24 02:12:13.435396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.435422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.435570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.435595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.435750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.435775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.435913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.435938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.679 qpair failed and we were unable to recover it. 00:33:58.679 [2024-07-24 02:12:13.436074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.679 [2024-07-24 02:12:13.436098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.436259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.436284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.436387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.436412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.436562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.436587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.436713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.436738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.436894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.436919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 
00:33:58.680 [2024-07-24 02:12:13.437048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.437073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.437243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.437270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.437403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.437432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.437547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.437572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.437729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.437754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.437861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.437886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.437993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.438019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.438157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.438182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.438331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.438361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.680 [2024-07-24 02:12:13.438462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.438487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 
00:33:58.680 [2024-07-24 02:12:13.438619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.680 [2024-07-24 02:12:13.438644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.680 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.438778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.438803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.438957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.438981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.439079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.439104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.439265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.439307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.439472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.439500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.439638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.439665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.439797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.439822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.439983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.440142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 
00:33:58.681 [2024-07-24 02:12:13.440328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.440461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.440588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.440765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.440923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.440948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.441079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.441103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.441229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.441259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.441387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.441413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.441537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.441561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.441692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.441717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 
00:33:58.681 [2024-07-24 02:12:13.441875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.441900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.442053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.442209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.442353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.442534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.442690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.442849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.442986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.443011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.443189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.443217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.443396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.443422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 
00:33:58.681 [2024-07-24 02:12:13.443519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.681 [2024-07-24 02:12:13.443544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.681 qpair failed and we were unable to recover it. 00:33:58.681 [2024-07-24 02:12:13.443651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.443679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.443788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.443814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.443949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.443973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.444102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.444235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.444377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.444538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.444686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.444816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 
00:33:58.682 [2024-07-24 02:12:13.444944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.444969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.445092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.445117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.445250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.445275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.445448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.445475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.445613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.445638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.445788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.445813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.445943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.445968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.446075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.446101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.446256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.446281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.446440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.446466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 
00:33:58.682 [2024-07-24 02:12:13.446572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.446597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.446727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.446752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.446874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.446903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.447950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.447975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 
00:33:58.682 [2024-07-24 02:12:13.448085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.448110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.448242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.448266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.448399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.682 [2024-07-24 02:12:13.448424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.682 qpair failed and we were unable to recover it. 00:33:58.682 [2024-07-24 02:12:13.448587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.448611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.448716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.448740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.448847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.448873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.449034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.449186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.449403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.449535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 
00:33:58.683 [2024-07-24 02:12:13.449660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.449792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.449945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.449970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.450108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.450132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.450257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.450283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.450428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.450454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.450583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.450608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.450735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.450760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.450888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.450912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.451064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.451093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 
00:33:58.683 [2024-07-24 02:12:13.451221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.451246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.451400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.451425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.451575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.451600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.451733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.451759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.451921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.451946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.452048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.452073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.452251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.452278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.452410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.452435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.452548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.452573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.452680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.452705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 
00:33:58.683 [2024-07-24 02:12:13.452864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.452889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.453020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.453045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.453144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.453169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.453295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.453325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.683 qpair failed and we were unable to recover it. 00:33:58.683 [2024-07-24 02:12:13.453425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.683 [2024-07-24 02:12:13.453449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.453588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.453613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.453742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.453767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.453898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.453923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.454057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.454198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 
00:33:58.684 [2024-07-24 02:12:13.454334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.454482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.454636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.454790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.454950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.454975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.455108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.455133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.455252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.455286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.455448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.455474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.455599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.455624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.455748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.455774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 
00:33:58.684 [2024-07-24 02:12:13.455907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.455932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.456087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.456239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.456370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.456521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.456707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.456863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.456992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.457017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.457176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.457201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.457338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.457364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 
00:33:58.684 [2024-07-24 02:12:13.457531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.457556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.457686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.457712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.457846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.457873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.458009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.458035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.458183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.684 [2024-07-24 02:12:13.458226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.684 qpair failed and we were unable to recover it. 00:33:58.684 [2024-07-24 02:12:13.458392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.458418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.458551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.458576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.458677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.458702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.458835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.458860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.458958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.458983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 
00:33:58.685 [2024-07-24 02:12:13.459096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.459230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.459389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.459516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.459677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.459834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.459957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.459984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.460160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.460188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.460326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.460368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.460505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.460530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 
00:33:58.685 [2024-07-24 02:12:13.460636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.460661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.460800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.460825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.460958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.460983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.461146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.461171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.461300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.461335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.461467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.461492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.461653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.461678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.461841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.461866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.461999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.462164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 
00:33:58.685 [2024-07-24 02:12:13.462311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.462486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.462644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.462812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.462965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.685 [2024-07-24 02:12:13.462992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.685 qpair failed and we were unable to recover it. 00:33:58.685 [2024-07-24 02:12:13.463144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.463173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.463306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.463340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.463495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.463520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.463649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.463674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.463830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.463855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 
00:33:58.686 [2024-07-24 02:12:13.463967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.463996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.464127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.464152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.464274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.464299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.464430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.464455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.464587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.464612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.464740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.464765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.464870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.464897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.465036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.465061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.465179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.465208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.465362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.465387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 
00:33:58.686 [2024-07-24 02:12:13.465525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.465550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.465682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.465710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.465835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.465863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.466036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.466062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.466224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.466249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.466378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.466404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.466513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.466538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.466693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.466718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.686 [2024-07-24 02:12:13.466848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.686 [2024-07-24 02:12:13.466873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.686 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.466979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 
00:33:58.687 [2024-07-24 02:12:13.467173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.467325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.467537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.467667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.467844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.467966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.467991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.468121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.468145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.468249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.468276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.468423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.468448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.468581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.468606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 
00:33:58.687 [2024-07-24 02:12:13.468740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.468766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.468901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.468927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.687 qpair failed and we were unable to recover it. 00:33:58.687 [2024-07-24 02:12:13.469970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.687 [2024-07-24 02:12:13.469996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.470125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.470150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 
00:33:58.688 [2024-07-24 02:12:13.470282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.470307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.470449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.470474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.470599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.470624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.470787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.470811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.470941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.470965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.471098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.471144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.471392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.471418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.471528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.471553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.471682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.471707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.471841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.471867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 
00:33:58.688 [2024-07-24 02:12:13.472003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.472132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.472289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.472459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.472647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.472787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.472941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.472966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.473099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.473125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.473242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.473269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.473428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.473453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 
00:33:58.688 [2024-07-24 02:12:13.473562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.473587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.473687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.473713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.473839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.473864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.473995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.474125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.474276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.474427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.474557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.474715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 00:33:58.688 [2024-07-24 02:12:13.474872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.688 [2024-07-24 02:12:13.474897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.688 qpair failed and we were unable to recover it. 
00:33:58.688 [2024-07-24 02:12:13.475002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.688 [2024-07-24 02:12:13.475026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420
00:33:58.688 qpair failed and we were unable to recover it.
00:33:58.688 [the same three-line error sequence repeats continuously from 02:12:13.475 through 02:12:13.508: posix_sock_create reports connect() failed, errno = 111; nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0xcc6600, 0x7f1164000b90, or 0x7f1154000b90 with addr=10.0.0.2, port=4420; and each attempt ends with "qpair failed and we were unable to recover it."]
00:33:58.703 [2024-07-24 02:12:13.508572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:58.703 [2024-07-24 02:12:13.508597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420
00:33:58.703 qpair failed and we were unable to recover it.
00:33:58.703 [2024-07-24 02:12:13.508714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.703 [2024-07-24 02:12:13.508740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.703 qpair failed and we were unable to recover it. 00:33:58.703 [2024-07-24 02:12:13.508869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.703 [2024-07-24 02:12:13.508895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.703 qpair failed and we were unable to recover it. 00:33:58.703 [2024-07-24 02:12:13.509056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.703 [2024-07-24 02:12:13.509081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.703 qpair failed and we were unable to recover it. 00:33:58.703 [2024-07-24 02:12:13.509253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.703 [2024-07-24 02:12:13.509282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.703 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.509394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.509420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.509518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.509543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.509668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.509692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.509824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.509849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.509989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.510014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.510115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.510141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 
00:33:58.704 [2024-07-24 02:12:13.510251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.510287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.704 [2024-07-24 02:12:13.510433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.704 [2024-07-24 02:12:13.510460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.704 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.510572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.510597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.510774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.510800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.510904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.510929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.511088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.511114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.511235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.511263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.511442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.511481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.511595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.511633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.511747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.511779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 
00:33:58.996 [2024-07-24 02:12:13.511884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.511910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.512043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.512203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.512382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.512526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.512681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.512845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.512981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.513011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.513170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.513203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.513354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.513381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 
00:33:58.996 [2024-07-24 02:12:13.513513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.513544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.513680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.513707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.513836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.513862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.514966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.514992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 
00:33:58.996 [2024-07-24 02:12:13.515170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.515199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.515325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.515351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.515483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.515508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.515670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.515696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.515860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.515885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.996 qpair failed and we were unable to recover it. 00:33:58.996 [2024-07-24 02:12:13.515994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.996 [2024-07-24 02:12:13.516021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.516175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.516206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.516348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.516379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.516545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.516571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.516703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.516728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 
00:33:58.997 [2024-07-24 02:12:13.516874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.516900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.517955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.517981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.518126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.518164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.518330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.518374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 
00:33:58.997 [2024-07-24 02:12:13.518515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.518541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.518658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.518683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.518853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.518879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.518987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.519013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.519127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.519153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.519252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.519277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.519432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.519470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.519632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.519659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.519798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.519823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.519980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 
00:33:58.997 [2024-07-24 02:12:13.520110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.520277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.520444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.520596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.520739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.520871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.520896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.521007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.521032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.521146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.521173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.521288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.521331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.521470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.521495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 
00:33:58.997 [2024-07-24 02:12:13.521628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.521653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.997 [2024-07-24 02:12:13.521779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.997 [2024-07-24 02:12:13.521803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.997 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.521939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.521964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.522076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.522101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.522229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.522254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.522388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.522414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.522545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.522570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.522706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.522731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.522864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.522889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.523016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.523041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 
00:33:58.998 [2024-07-24 02:12:13.523159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.523187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.523386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.523411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.523553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.523578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.523696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.523721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.523879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.523904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.524040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.524064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.524164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.524189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.524321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.524347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.524487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.524512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.524640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.524665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 
00:33:58.998 [2024-07-24 02:12:13.524802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.524827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.524985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.525136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.525294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.525438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.525572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.525707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.525889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.525914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.526057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.526082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.526237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.526265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 
00:33:58.998 [2024-07-24 02:12:13.526418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.526444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.526574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.526603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.526717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.526742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.526846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.526872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.526977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.527003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.527113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.527139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.527278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.527304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.998 [2024-07-24 02:12:13.527449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.998 [2024-07-24 02:12:13.527474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.998 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.527598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.527629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.527768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.527793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-07-24 02:12:13.527920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.527945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.528075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.528119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.528267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.528292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.528406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.528431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.528562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.528587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.528723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.528748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.528871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.528896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.529029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.529183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.529361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-07-24 02:12:13.529512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.529637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.529790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.529971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.529995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.530121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.530148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.530314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.530360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.530468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.530493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.530597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.530621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.530748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.530779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.530935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.530959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-07-24 02:12:13.531062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.531086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.531241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.531266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.531434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.531459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.531593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.531618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.531764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.531789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.531945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.531969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.532071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.532098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.532251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.532279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.532410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.532435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.532578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.532613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 
00:33:58.999 [2024-07-24 02:12:13.532740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.532765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.532923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.532948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.533076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.533102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.533218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:58.999 [2024-07-24 02:12:13.533245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:58.999 qpair failed and we were unable to recover it. 00:33:58.999 [2024-07-24 02:12:13.533409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.533434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.533537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.533562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.533664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.533689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.533846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.533871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.533996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.534020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.534150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.534175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 
00:33:59.000 [2024-07-24 02:12:13.534338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.534363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.534489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.534515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.534660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.534685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.534827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.534852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.534985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.535009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.535200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.535228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.535389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.535414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.535546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.535571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.535705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.535730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.535886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.535910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 
00:33:59.000 [2024-07-24 02:12:13.536041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.536065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.536228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.536253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.536404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.536430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.536533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.536558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.536664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.536689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.536840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.536864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.536997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.537168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.537339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.537485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 
00:33:59.000 [2024-07-24 02:12:13.537629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.537788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.000 [2024-07-24 02:12:13.537949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.000 [2024-07-24 02:12:13.537973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.000 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.538103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.538127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.538261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.538285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.538405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.538431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.538566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.538590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.538733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.538758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.538918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.538943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.539070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.539095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 
00:33:59.001 [2024-07-24 02:12:13.539251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.539278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.539437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.539463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.539596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.539621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.539755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.539779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.539909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.539934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.540061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.540085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.540219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.540244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.540371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.540397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.540509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.540535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.540658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.540683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 
00:33:59.001 [2024-07-24 02:12:13.540844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.540868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.540999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.541158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.541296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.541507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.541641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.541803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.541928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.541953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.542066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.542091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.542252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.542277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 
00:33:59.001 [2024-07-24 02:12:13.542393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.542419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.542522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.542548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.542709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.542734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.542863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.542887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.542995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.543020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.543166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.543193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.543435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.543461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.543593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.543618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.543714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.001 [2024-07-24 02:12:13.543738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.001 qpair failed and we were unable to recover it. 00:33:59.001 [2024-07-24 02:12:13.543835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.543860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 
00:33:59.002 [2024-07-24 02:12:13.543961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.543986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.544137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.544162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.544265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.544290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.544402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.544429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.544539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.544564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.544698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.544723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.544878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.544904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.545046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.545071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1583689 Killed "${NVMF_APP[@]}" "$@" 00:33:59.002 [2024-07-24 02:12:13.545252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.545280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.545435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.545461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 
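Note on the records above: errno 111 in the repeated posix_sock_create connect() failures is ECONNREFUSED on Linux, which is expected at this point in the run because target_disconnect.sh has just killed the nvmf target process (the `Killed "${NVMF_APP[@]}"` message from line 36), so nothing is listening on 10.0.0.2:4420 until a new target is started. As a minimal standalone illustration only (not part of the SPDK test; the 127.0.0.1 address is an assumption chosen so that no listener is present), a plain POSIX connect() to a TCP port with no listener fails with the same errno:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Plain TCP client socket, same family/type as the failing connection in the log. */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                      /* NVMe/TCP port used in the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);  /* assumption: nothing listens here */

        /* With no listener on the port, connect() fails with errno 111 (ECONNREFUSED). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }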
00:33:59.002 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:59.002 [2024-07-24 02:12:13.545584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.545610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:59.002 [2024-07-24 02:12:13.545743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.545769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:59.002 [2024-07-24 02:12:13.545905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:59.002 [2024-07-24 02:12:13.545931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.002 [2024-07-24 02:12:13.546088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.546113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.546251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.546276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.546379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.546405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.546540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.546566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.546721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.546746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 
00:33:59.002 [2024-07-24 02:12:13.546908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.546933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.547090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.547269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.547428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.547584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.547715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.547870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.547985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.548139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.548262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 
00:33:59.002 [2024-07-24 02:12:13.548400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.548554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.548754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.548910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.548935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 [2024-07-24 02:12:13.549032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.002 [2024-07-24 02:12:13.549057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.002 qpair failed and we were unable to recover it. 00:33:59.002 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1584249 00:33:59.002 [2024-07-24 02:12:13.549183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.549212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1584249 00:33:59.003 [2024-07-24 02:12:13.549338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.549364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1584249 ']' 00:33:59.003 [2024-07-24 02:12:13.549502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.549529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 
00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.003 [2024-07-24 02:12:13.549672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.549698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:59.003 [2024-07-24 02:12:13.549816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.549842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:59.003 [2024-07-24 02:12:13.549976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.003 [2024-07-24 02:12:13.550140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.550266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.550449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.550631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 
00:33:59.003 [2024-07-24 02:12:13.550770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.550907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.550932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.551089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.551114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.551269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.551306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.551467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.551493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.551592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.551617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.551747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.551772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.551906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.551931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.552032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.552187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 
00:33:59.003 [2024-07-24 02:12:13.552379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.552505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.552648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.552811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.552961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.552985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.553093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.553117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.553246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.553271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.553412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.553438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.553547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.553571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.553703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.553728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 
00:33:59.003 [2024-07-24 02:12:13.553834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.553859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.553982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.003 [2024-07-24 02:12:13.554010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.003 qpair failed and we were unable to recover it. 00:33:59.003 [2024-07-24 02:12:13.554162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.554187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.554357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.554386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.554612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.554640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.554779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.554812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.554988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.555117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.555252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.555441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 
00:33:59.004 [2024-07-24 02:12:13.555601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.555784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.555966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.555992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.556121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.556146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.556275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.556300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.556453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.556481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.556671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.556700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.556860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.556888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.557045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.557083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.557249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.557276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 
00:33:59.004 [2024-07-24 02:12:13.557427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.557458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.557616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.557644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.557808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.557837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.558004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.558029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.558163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.558188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.558362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.558391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.558537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.558566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.558700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.558727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.558873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.558898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.559004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.559029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 
00:33:59.004 [2024-07-24 02:12:13.559134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.559160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.559262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.559287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.559334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd4620 (9): Bad file descriptor 00:33:59.004 [2024-07-24 02:12:13.559561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.559615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.559780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.559824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.559926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.559951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.560087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.560113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.560212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.560238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.004 [2024-07-24 02:12:13.560395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.004 [2024-07-24 02:12:13.560442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.004 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.560607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.560650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 
00:33:59.005 [2024-07-24 02:12:13.560804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.560831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.560966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.560991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.561153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.561179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.561276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.561302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.561423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.561466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.561632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.561674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.561807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.561832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.561989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.562149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.562334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 
00:33:59.005 [2024-07-24 02:12:13.562492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.562691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.562841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.562972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.562997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.563139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.563165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.563289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.563314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.563476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.563518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.563677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.563705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.563860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.563885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.564044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.564069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 
00:33:59.005 [2024-07-24 02:12:13.564205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.564230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.564348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.564373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.564539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.564584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.564774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.564817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.564946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.564989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.565097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.565127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.565258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.565283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.565411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.565437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.565562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.005 [2024-07-24 02:12:13.565587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.005 qpair failed and we were unable to recover it. 00:33:59.005 [2024-07-24 02:12:13.565684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.565709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 
00:33:59.006 [2024-07-24 02:12:13.565876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.565902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.566035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.566060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.566173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.566198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.566328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.566354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.566503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.566547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.566685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.566726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.566858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.566884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.567020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.567045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.567153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.567178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.567328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.567355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 
00:33:59.006 [2024-07-24 02:12:13.567460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.567487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.567637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.567680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.567835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.567879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.568020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.568047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.568187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.568213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.568346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.568373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.568497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.568539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.568701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.568745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.568880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.568906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.569071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.569096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 
00:33:59.006 [2024-07-24 02:12:13.569194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.569219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.569337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.569363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.569507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.569550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.569699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.569725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.569854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.569878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.569999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.570025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.570160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.570185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.570339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.570365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.570546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.570591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.570719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.570760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 
00:33:59.006 [2024-07-24 02:12:13.570893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.570919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.571051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.571077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.571233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.571258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.571417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.571461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.571656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.006 [2024-07-24 02:12:13.571699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.006 qpair failed and we were unable to recover it. 00:33:59.006 [2024-07-24 02:12:13.571823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.571872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.572036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.572061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.572194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.572219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.572351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.572377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.572552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.572593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 
00:33:59.007 [2024-07-24 02:12:13.572745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.572785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.572896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.572926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.573072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.573097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.573204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.573231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.573402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.573444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.573620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.573646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.573773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.573800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.573933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.573958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.574065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.574092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.574230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.574256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 
00:33:59.007 [2024-07-24 02:12:13.574411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.574438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.574587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.574630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.574786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.574832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.574987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.575158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.575303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.575458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.575620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.575809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.575941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.575967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 
00:33:59.007 [2024-07-24 02:12:13.576076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.576102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.576203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.576229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.576362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.576388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.576489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.576514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.576678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.576703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.576861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.576886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.577030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.577056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.577183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.577208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.577349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.577374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.577506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.577531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 
00:33:59.007 [2024-07-24 02:12:13.577642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.007 [2024-07-24 02:12:13.577668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.007 qpair failed and we were unable to recover it. 00:33:59.007 [2024-07-24 02:12:13.577769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.577796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.577934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.577961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.578064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.578089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.578233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.578259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.578403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.578447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.578584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.578609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.578740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.578765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.578893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.578918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.579053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.579078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 
00:33:59.008 [2024-07-24 02:12:13.579209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.579234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.579375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.579401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.579551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.579577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.579677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.579702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.579871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.579897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.580034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.580158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.580285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.580457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.580615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 
00:33:59.008 [2024-07-24 02:12:13.580740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.580904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.580929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.581090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.581247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.581389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.581552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.581714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.581847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.581984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.582009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.582147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.582173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 
00:33:59.008 [2024-07-24 02:12:13.582345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.582371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.582506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.582533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.582682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.582708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.582843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.582869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.583026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.583051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.583210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.583236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.583396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.583422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.008 qpair failed and we were unable to recover it. 00:33:59.008 [2024-07-24 02:12:13.583525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.008 [2024-07-24 02:12:13.583552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.009 qpair failed and we were unable to recover it. 00:33:59.009 [2024-07-24 02:12:13.583691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.009 [2024-07-24 02:12:13.583716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.009 qpair failed and we were unable to recover it. 00:33:59.009 [2024-07-24 02:12:13.583824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.009 [2024-07-24 02:12:13.583850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.009 qpair failed and we were unable to recover it. 
00:33:59.010 [2024-07-24 02:12:13.592093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.592118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.592226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.592251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.592436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.592463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.592600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.592636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.592735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.592759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.592868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.592894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 [2024-07-24 02:12:13.592880] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization...
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.592957] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:59.010 [2024-07-24 02:12:13.593059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.593084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.593250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.593278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.010 [2024-07-24 02:12:13.593452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.010 [2024-07-24 02:12:13.593478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.010 qpair failed and we were unable to recover it.
00:33:59.015 [2024-07-24 02:12:13.617190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.617216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.617327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.617353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.617490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.617515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.617627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.617652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.617781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.617807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.617939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.617964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.618103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.618221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.618355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.618502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 
00:33:59.015 [2024-07-24 02:12:13.618659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.618783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.618948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.618973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.619104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.619131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.619231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.619257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.619419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.619445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.619603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.619628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.619757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.619782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.619919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.619946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.620049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.620074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 
00:33:59.015 [2024-07-24 02:12:13.620209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.620234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.620369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.620395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.620535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.620560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.015 [2024-07-24 02:12:13.620663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.015 [2024-07-24 02:12:13.620689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.015 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.620807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.620832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.620967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.620992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.621144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.621170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.621328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.621354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.621459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.621484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.621587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.621622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 
00:33:59.016 [2024-07-24 02:12:13.621755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.621780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.621908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.621933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.622074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.622100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.622205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.622230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.622358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.622384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.622522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.622551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.622691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.622716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.622874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.622899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.623000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.623185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 
00:33:59.016 [2024-07-24 02:12:13.623308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.623473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.623608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.623735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.623866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.623893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.624053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.624078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.624205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.624231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.624363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.624389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.624527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.624552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.624667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.624692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 
00:33:59.016 [2024-07-24 02:12:13.624821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.624847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.625870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.625897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.626059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.626084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 00:33:59.016 [2024-07-24 02:12:13.626215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.016 [2024-07-24 02:12:13.626240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.016 qpair failed and we were unable to recover it. 
00:33:59.016 [2024-07-24 02:12:13.626363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.626390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.626527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.626551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.626704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.626730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.626868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.626892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.626994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.627151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.627287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.627432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.627589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.627756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 
00:33:59.017 [2024-07-24 02:12:13.627936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.627962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.628072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.628096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 EAL: No free 2048 kB hugepages reported on node 1 00:33:59.017 [2024-07-24 02:12:13.628229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.628255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.628361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.628386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.628554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.628580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.628760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.628798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.628945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.628973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.629110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.629136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.629271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.629297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.629407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.629433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 
00:33:59.017 [2024-07-24 02:12:13.629560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.629586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.629744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.629769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.629908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.629933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.630095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.630235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.630413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.630568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.630731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.630875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.630979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.631005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 
00:33:59.017 [2024-07-24 02:12:13.631111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.631138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.631255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.631282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.631501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.631528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.631677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.631702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.631838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.631863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.017 [2024-07-24 02:12:13.631995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.017 [2024-07-24 02:12:13.632021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.017 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.632156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.632181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.632290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.632330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.632465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.632491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.632629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.632655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 
00:33:59.018 [2024-07-24 02:12:13.632751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.632777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.632907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.632932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.633044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.633069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.633221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.633246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.633383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.633409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.633558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.633586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.633702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.633728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.633858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.633883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.634018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.634181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 
00:33:59.018 [2024-07-24 02:12:13.634312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.634454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.634589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.634715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.634892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.634916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.635070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.635109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.635218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.635245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.635381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.635409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.635543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.635568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.635682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.635707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 
00:33:59.018 [2024-07-24 02:12:13.635868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.635893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.635998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.636152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.636347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.636481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.636639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.636779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.636953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.636980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.637087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.637117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.637264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.637290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 
00:33:59.018 [2024-07-24 02:12:13.637454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.637479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.637585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.018 [2024-07-24 02:12:13.637609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.018 qpair failed and we were unable to recover it. 00:33:59.018 [2024-07-24 02:12:13.637763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.637789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.637895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.637920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.638024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.638212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.638372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.638503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.638661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.638794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 
00:33:59.019 [2024-07-24 02:12:13.638918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.638943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.639948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.639973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.640082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.640108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.640220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.640246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 
00:33:59.019 [2024-07-24 02:12:13.640372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.640397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.640525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.640550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.640699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.640724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.640861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.640886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.640992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.641179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.641335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.641477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.641603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.641788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 
00:33:59.019 [2024-07-24 02:12:13.641921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.641947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.642896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.642921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.019 [2024-07-24 02:12:13.643029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.019 [2024-07-24 02:12:13.643054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.019 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.643165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.643190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 
00:33:59.020 [2024-07-24 02:12:13.643297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.643342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.643478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.643505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.643640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.643665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.643793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.643818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.643956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.643982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.644115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.644140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.644273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.644299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.644434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.644459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.644617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.644643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.644751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.644778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 
00:33:59.020 [2024-07-24 02:12:13.644933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.644959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.645078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.645116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.645226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.645253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.645405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.645433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.645567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.645593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.645755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.645780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.645929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.645955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.646062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.646087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.646203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.646228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.646352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.646379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 
00:33:59.020 [2024-07-24 02:12:13.646511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.646537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.646672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.646697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.646832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.646857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.647005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.647030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.647162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.647187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.647352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.647389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.647499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.647527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.647699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.647738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.020 qpair failed and we were unable to recover it. 00:33:59.020 [2024-07-24 02:12:13.647882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.020 [2024-07-24 02:12:13.647909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.648010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 
00:33:59.021 [2024-07-24 02:12:13.648177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.648351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.648484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.648622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.648763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.648895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.648924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.649035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.649166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.649312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.649479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 
00:33:59.021 [2024-07-24 02:12:13.649649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.649777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.649958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.649983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.650116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.650142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.650260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.650285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.650405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.650432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.650569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.650593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.650732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.650757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.650890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.650915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.651048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.651073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 
00:33:59.021 [2024-07-24 02:12:13.651211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.651236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.651368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.651394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.651514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.651546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.651660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.651687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.651821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.651847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.652013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.652039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.652253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.652279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.652397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.652424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.652538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.652563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.652725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.652751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 
00:33:59.021 [2024-07-24 02:12:13.652882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.652908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.653065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.653091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.653225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.653252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.653464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.653490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.653652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.021 [2024-07-24 02:12:13.653678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.021 qpair failed and we were unable to recover it. 00:33:59.021 [2024-07-24 02:12:13.653794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.653824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.653930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.653956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.654094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.654121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.654268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.654293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.654416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.654443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 
00:33:59.022 [2024-07-24 02:12:13.654543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.654570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.654696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.654721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.654854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.654880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.655021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.655047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.655188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.655228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.655386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.655425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.655534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.655561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.655698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.655724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.655857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.655882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.656021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 
00:33:59.022 [2024-07-24 02:12:13.656149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.656273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.656445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.656586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.656750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.656879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.656905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.657032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.657057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.657159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.657184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.657343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.657368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.657530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.657555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 
00:33:59.022 [2024-07-24 02:12:13.657669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.657694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.657857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.657881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.657986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.658114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.658276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.658422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.658550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.658707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.658866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.658891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.659104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.659129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 
00:33:59.022 [2024-07-24 02:12:13.659343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.659368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.022 qpair failed and we were unable to recover it. 00:33:59.022 [2024-07-24 02:12:13.659482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.022 [2024-07-24 02:12:13.659507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.659607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.659632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.659768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.659792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.659887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.659912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.660043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.660068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.660281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.660306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.660424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.660452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.660558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.660583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.660685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.660709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 
00:33:59.023 [2024-07-24 02:12:13.660838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.660863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.660996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.661172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.661298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.661470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.661633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.661793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.661923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.661949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.662070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:59.023 [2024-07-24 02:12:13.662090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.662115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.662253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.662279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 
00:33:59.023 [2024-07-24 02:12:13.662495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.662520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.662622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.662648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.662779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.662803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.662934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.662959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.663101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.663128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.663244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.663269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.663413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.663439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.663545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.663570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.663678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.663703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.663835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.663861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 
00:33:59.023 [2024-07-24 02:12:13.663999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.664138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.664299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.664444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.664627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.664780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.664909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.664936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.023 [2024-07-24 02:12:13.665099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.023 [2024-07-24 02:12:13.665125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.023 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.665256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.665281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.665389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.665414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 
00:33:59.024 [2024-07-24 02:12:13.665550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.665575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.665711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.665735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.665838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.665862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.666000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.666025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.666126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.666151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.666378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.666417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.666573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.666617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.666723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.666751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.666893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.666919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.667056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.667082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 
00:33:59.024 [2024-07-24 02:12:13.667240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.667266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.667395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.667423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.667576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.667601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.667707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.667732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.667870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.667895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.668009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.668146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.668275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.668461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.668648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 
00:33:59.024 [2024-07-24 02:12:13.668785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.668921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.668947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.669079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.669104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.669247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.669272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.669403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.669429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.669564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.669589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.669733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.669758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.669887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.669912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.670056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.670211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 
00:33:59.024 [2024-07-24 02:12:13.670349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.670478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.670618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.670792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.024 [2024-07-24 02:12:13.670948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.024 [2024-07-24 02:12:13.670975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.024 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.671110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.671136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.671273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.671299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.671412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.671439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.671586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.671611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.671769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.671794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 
00:33:59.025 [2024-07-24 02:12:13.671925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.671950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.672937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.672962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.673075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.673204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 
00:33:59.025 [2024-07-24 02:12:13.673387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.673511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.673633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.673785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.673911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.673937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.674095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.674258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.674402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.674538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.674704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 
00:33:59.025 [2024-07-24 02:12:13.674836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.674971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.674995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.675102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.675126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.675258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.675283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.675432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.675459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.675567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.025 [2024-07-24 02:12:13.675593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.025 qpair failed and we were unable to recover it. 00:33:59.025 [2024-07-24 02:12:13.675741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.675766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.675895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.675920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.676065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.676192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 
00:33:59.026 [2024-07-24 02:12:13.676355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.676487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.676623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.676752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.676889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.676914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.677048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.677073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.677199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.677224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.677358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.677383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.677495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.677533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.677719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.677760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 
00:33:59.026 [2024-07-24 02:12:13.678003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.678044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.678189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.678218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.678366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.678394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.678547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.678573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.678681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.678707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.678845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.678870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.679002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.679160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.679298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.679461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 
00:33:59.026 [2024-07-24 02:12:13.679656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.679790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.679923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.679951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.680972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.680996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 
00:33:59.026 [2024-07-24 02:12:13.681104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.681128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.681237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.681261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.026 [2024-07-24 02:12:13.681391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.026 [2024-07-24 02:12:13.681416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.026 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.681555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.681581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.681694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.681719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.681855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.681880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.681987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.682013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.682143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.682167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.682297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.682330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.682449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.682475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 
00:33:59.027 [2024-07-24 02:12:13.682687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.682711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.682840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.682865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.682992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.683140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.683270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.683451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.683594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.683758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.683925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.683951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.684053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 
00:33:59.027 [2024-07-24 02:12:13.684206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.684342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.684474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.684632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.684763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.684916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.684940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.685074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.685098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.685202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.685228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.685376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.685401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.685499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.685524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 
00:33:59.027 [2024-07-24 02:12:13.685672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.685697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.685837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.685864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.685995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.686021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.686181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.686207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.686312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.686343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.686448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.686473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.686607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.686632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.686847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.027 [2024-07-24 02:12:13.686873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.027 qpair failed and we were unable to recover it. 00:33:59.027 [2024-07-24 02:12:13.686978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.687005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.687176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.687202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 
00:33:59.028 [2024-07-24 02:12:13.687337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.687363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.687509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.687553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.687727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.687754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.687891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.687916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.688024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.688185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.688343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.688504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.688655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.688840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 
00:33:59.028 [2024-07-24 02:12:13.688969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.688994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.689137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.689162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.689329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.689357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.689573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.689599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.689717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.689743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.689886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.689913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.690073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.690098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.690255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.690281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.690393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.690421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.690534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.690559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 
00:33:59.028 [2024-07-24 02:12:13.690703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.690729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.690860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.690886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.691933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.691959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.692076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.692102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 
00:33:59.028 [2024-07-24 02:12:13.692222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.692261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.692431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.692469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.692638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.692664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.692768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.692793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.028 [2024-07-24 02:12:13.692899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.028 [2024-07-24 02:12:13.692925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.028 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.693025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.693050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.693177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.693202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.693329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.693369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.693489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.693517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.693630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.693657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 
00:33:59.029 [2024-07-24 02:12:13.693785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.693812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.693976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.694145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.694309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.694487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.694650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.694804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.694932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.694957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.695065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.695090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.695208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.695247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 
00:33:59.029 [2024-07-24 02:12:13.695370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.695398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.695516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.695544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.695647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.695673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.695812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.695837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.695988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.696147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.696282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.696442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.696571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.696728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 
00:33:59.029 [2024-07-24 02:12:13.696858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.696883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.697959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.697984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.698103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.698133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.029 qpair failed and we were unable to recover it. 00:33:59.029 [2024-07-24 02:12:13.698271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.029 [2024-07-24 02:12:13.698310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 
00:33:59.030 [2024-07-24 02:12:13.698456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.698483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.698650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.698676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.698815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.698842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.698985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.699154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.699286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.699462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.699610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.699771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.699921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.699946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 
00:33:59.030 [2024-07-24 02:12:13.700046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.700071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.700178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.700203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.700376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.700415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.700560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.700586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.700705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.700731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.700860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.700886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.700998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.701157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.701289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.701436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 
00:33:59.030 [2024-07-24 02:12:13.701568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.701727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.701857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.701882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.701985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.702118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.702277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.702424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.702585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.702745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.702891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.702916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 
00:33:59.030 [2024-07-24 02:12:13.703054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.703079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.703184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.030 [2024-07-24 02:12:13.703209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.030 qpair failed and we were unable to recover it. 00:33:59.030 [2024-07-24 02:12:13.703325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.703350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.703460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.703486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.703587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.703612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.703725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.703750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.703848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.703873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.703979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.704132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.704296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 
00:33:59.031 [2024-07-24 02:12:13.704439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.704565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.704685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.704873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.704898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.705023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.705154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.705285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.705444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.705599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.705769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 
00:33:59.031 [2024-07-24 02:12:13.705925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.705951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.706050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.706074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.706182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.706213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.706333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.706373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.706559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.706586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.706694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.706719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.706931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.706955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.707069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.707093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.707226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.707250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.707356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.707382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 
00:33:59.031 [2024-07-24 02:12:13.707538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.707562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.707721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.707746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.707879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.707903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.708044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.708069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.708182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.708206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.708313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.708346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.708457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.708483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.708615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.708640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.031 [2024-07-24 02:12:13.708773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.031 [2024-07-24 02:12:13.708798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.031 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.708901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.708925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 
00:33:59.032 [2024-07-24 02:12:13.709136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.709160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.709322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.709348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.709485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.709509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.709625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.709650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.709786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.709811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.709966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.709991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.710092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.710224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.710355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.710486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 
00:33:59.032 [2024-07-24 02:12:13.710650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.710835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.710968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.710993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.711149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.711174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.711297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.711328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.711436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.711461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.711595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.711619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.711714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.711738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.711870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.711894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.712042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.712067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 
00:33:59.032 [2024-07-24 02:12:13.712164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.712189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.712348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.712373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.712511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.712535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.712718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.712758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.712896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.712923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.713086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.713112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.713252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.713279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.713446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.713473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.713605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.713630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.713764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.713790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 
00:33:59.032 [2024-07-24 02:12:13.713946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.713971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.714112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.714138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.714271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.714298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.714411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.714437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.714582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.714608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.032 [2024-07-24 02:12:13.714735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.032 [2024-07-24 02:12:13.714761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.032 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.714874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.714905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.715008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.715209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.715371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 
00:33:59.033 [2024-07-24 02:12:13.715504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.715664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.715794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.715924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.715949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.716078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.716103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.716247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.716272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.716393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.716418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.716547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.716572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.716707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.716733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.716865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.716889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 
00:33:59.033 [2024-07-24 02:12:13.717025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.717050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.717185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.717210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.717420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.717445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.717577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.717602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.717706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.717731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.717832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.717857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.718011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.718167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.718335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.718503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 
00:33:59.033 [2024-07-24 02:12:13.718664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.718795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.718922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.718946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.719060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.719086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.719231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.719256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.719386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.719413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.719623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.719648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.719783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.719808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.719908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.719933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.720140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.720165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 
00:33:59.033 [2024-07-24 02:12:13.720274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.720299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.720510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.033 [2024-07-24 02:12:13.720535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.033 qpair failed and we were unable to recover it. 00:33:59.033 [2024-07-24 02:12:13.720675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.720701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.720828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.720853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.721060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.721085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.721254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.721279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.721390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.721416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.721575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.721607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.721718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.721743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.721875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.721899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 
00:33:59.034 [2024-07-24 02:12:13.722032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.722193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.722326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.722478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.722636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.722768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.722900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.722925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.723090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.723116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.723242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.723267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.723371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.723396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 
00:33:59.034 [2024-07-24 02:12:13.723531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.723556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.723667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.723693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.723817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.723843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.724052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.724077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.724178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.724204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.724382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.724423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.724573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.724600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.724731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.724757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.724873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.724899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.725065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.725091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 
00:33:59.034 [2024-07-24 02:12:13.725202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.725230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.725369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.725396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.725499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.725523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.034 qpair failed and we were unable to recover it. 00:33:59.034 [2024-07-24 02:12:13.725628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.034 [2024-07-24 02:12:13.725653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.725780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.725809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.725940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.725965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.726064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.726089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.726226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.726253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.726376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.726403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.726535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.726561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 
00:33:59.035 [2024-07-24 02:12:13.726703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.726729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.726839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.726865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.726995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.727157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.727289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.727460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.727594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.727724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.727904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.727928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.728032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.728056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 
00:33:59.035 [2024-07-24 02:12:13.728190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.728215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.728327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.728353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.728562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.728587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.728718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.728743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.728895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.728920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.729048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.729205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.729368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.729528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.729687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 
00:33:59.035 [2024-07-24 02:12:13.729844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.729969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.729998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.730100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.730125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.730260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.730285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.730438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.730477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.730594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.730622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.730732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.730758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.730867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.730892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.731033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.731059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 00:33:59.035 [2024-07-24 02:12:13.731195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.035 [2024-07-24 02:12:13.731221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.035 qpair failed and we were unable to recover it. 
00:33:59.035 [2024-07-24 02:12:13.731323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.731349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.731461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.731487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.731614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.731639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.731744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.731769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.731910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.731934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.732038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.732220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.732373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.732502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.732653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 
00:33:59.036 [2024-07-24 02:12:13.732837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.732972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.732996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.733127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.733152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.733285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.733309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.733435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.733459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.733594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.733623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.733760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.733786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.733893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.733919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.734070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.734101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.734238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.734263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 
00:33:59.036 [2024-07-24 02:12:13.734402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.734428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.734559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.734585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.734748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.734773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.734880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.734905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.735045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.735070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.735280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.735304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.735422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.735447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.735603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.735628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.735754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.735779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.735882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.735907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 
00:33:59.036 [2024-07-24 02:12:13.736045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.736178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.736339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.736471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.736632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.736784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.736917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.736942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.737044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.737069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.036 qpair failed and we were unable to recover it. 00:33:59.036 [2024-07-24 02:12:13.737202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.036 [2024-07-24 02:12:13.737227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.737361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.737386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 
00:33:59.037 [2024-07-24 02:12:13.737498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.737523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.737730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.737755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.737879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.737904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.738928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.738954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 
00:33:59.037 [2024-07-24 02:12:13.739055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.739213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.739352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.739508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.739639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.739767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.739904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.739931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.740031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.740202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.740339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 
00:33:59.037 [2024-07-24 02:12:13.740472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.740610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.740768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.740952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.740978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.741115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.741141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.741251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.741277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.741412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.741438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.741578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.741604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.741707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.741732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.741870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.741895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 
00:33:59.037 [2024-07-24 02:12:13.742027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.742159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.742327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.742471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.742638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.742766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.742925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.742951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.743090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.037 [2024-07-24 02:12:13.743116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.037 qpair failed and we were unable to recover it. 00:33:59.037 [2024-07-24 02:12:13.743253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.743280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.743428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.743454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 
00:33:59.038 [2024-07-24 02:12:13.743588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.743614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.743740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.743766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.743903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.743929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.744031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.744058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.744215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.744241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.744403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.744430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.744542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.744568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.744670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.744696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.744840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.744866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.745024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 
00:33:59.038 [2024-07-24 02:12:13.745154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.745324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.745453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.745590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.745750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.745906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.745932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.746076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.746212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.746375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.746525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 
00:33:59.038 [2024-07-24 02:12:13.746685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.746834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.746958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.746983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.747098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.747124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.747216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.747242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.747350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.747377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.747539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.747565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.747667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.747693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.747850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.747875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.748012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 
00:33:59.038 [2024-07-24 02:12:13.748134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.748274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.748491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.748655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.748810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.748941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.748966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.749084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.749109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.038 qpair failed and we were unable to recover it. 00:33:59.038 [2024-07-24 02:12:13.749238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.038 [2024-07-24 02:12:13.749263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.749397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.749423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.749527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.749553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 
00:33:59.039 [2024-07-24 02:12:13.749683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.749708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.749840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.749864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.749991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.750016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.750148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.750173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.750341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.750367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.750495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.750519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.750645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.750684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.750811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.750838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.751006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.751137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 
00:33:59.039 [2024-07-24 02:12:13.751269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.751484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.751659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.751795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.751938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.751964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.752061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.752086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.752190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.752215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.752340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.752366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.752475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.752501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.752712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.752741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 
00:33:59.039 [2024-07-24 02:12:13.752877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.752902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.039 [2024-07-24 02:12:13.753771] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.039 [2024-07-24 02:12:13.753785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.039 [2024-07-24 02:12:13.753797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.039 [2024-07-24 02:12:13.753807] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.039 [2024-07-24 02:12:13.753807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 
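The app_setup_trace notices interleaved above explain how to capture the tracepoint data for this run; a minimal sketch of that workflow, using only the command and shared-memory file named in the notices (the copy destination below is an arbitrary example):

# snapshot events from the running nvmf target (shm id 0), as the notice suggests
spdk_trace -s nvmf -i 0
# plain 'spdk_trace' also works when this is the only SPDK app running
spdk_trace
# or keep the shared-memory trace file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0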
00:33:59.039 [2024-07-24 02:12:13.753867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:59.039 [2024-07-24 02:12:13.753945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.039 [2024-07-24 02:12:13.753970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.039 qpair failed and we were unable to recover it. 00:33:59.039 [2024-07-24 02:12:13.753896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:59.040 [2024-07-24 02:12:13.753945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:59.040 [2024-07-24 02:12:13.753948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:59.040 [2024-07-24 02:12:13.754070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.754223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.754381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.754536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.754680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.754818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.754952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.754980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.755083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 
00:33:59.040 [2024-07-24 02:12:13.755240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.755409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.755563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.755696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.755827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.755950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.755976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.756123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.756162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.756296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.756331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.756481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.756507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.756646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.756672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 
00:33:59.040 [2024-07-24 02:12:13.756778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.756804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.756943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.756968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.757963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.757991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.758095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 
00:33:59.040 [2024-07-24 02:12:13.758235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.758394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.758523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.758654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.758776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.758901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.758927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.759037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.759063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.759194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.759222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.759332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.759357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 00:33:59.040 [2024-07-24 02:12:13.759461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.759487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.040 qpair failed and we were unable to recover it. 
00:33:59.040 [2024-07-24 02:12:13.759584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.040 [2024-07-24 02:12:13.759609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.759741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.759766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.759865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.759890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.760914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.760939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 
00:33:59.041 [2024-07-24 02:12:13.761071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.761204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.761336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.761466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.761594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.761722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.761861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.761891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.762024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.762149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.762279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 
00:33:59.041 [2024-07-24 02:12:13.762482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.762614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.762769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.762903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.762929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.763047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.763187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.763352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.763476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.763611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.763765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 
00:33:59.041 [2024-07-24 02:12:13.763908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.763932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.764875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.764899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.765024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.765049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.765157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.765184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 
00:33:59.041 [2024-07-24 02:12:13.765289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.765313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.765471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.765496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.041 [2024-07-24 02:12:13.765603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.041 [2024-07-24 02:12:13.765631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.041 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.765739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.765764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.765897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.765922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 
00:33:59.042 [2024-07-24 02:12:13.766711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.766867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.766975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.767891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.767994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 
00:33:59.042 [2024-07-24 02:12:13.768112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.768236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.768380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.768518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.768642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.768772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.768903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.768930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.769036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.769165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.769298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 
00:33:59.042 [2024-07-24 02:12:13.769440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.769598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.769750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.769908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.769934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.770041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.770197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.770330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.770491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.770628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.770750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 
00:33:59.042 [2024-07-24 02:12:13.770892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.770918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.771016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.771043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.771146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.771172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.771279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.771310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.771446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.771471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.042 qpair failed and we were unable to recover it. 00:33:59.042 [2024-07-24 02:12:13.771583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.042 [2024-07-24 02:12:13.771608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.771751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.771777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.771885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.771911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 
00:33:59.043 [2024-07-24 02:12:13.772300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.772969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.772994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.773101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.773126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.773228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.773253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.773391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.773416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.773628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.773652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 
00:33:59.043 [2024-07-24 02:12:13.773764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.773788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.773922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.773947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.774881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.774983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 
00:33:59.043 [2024-07-24 02:12:13.775106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.775229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.775369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.775504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.775631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.775765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.775927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.775953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.776075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.776100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.776203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.776228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.776358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.776383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 
00:33:59.043 [2024-07-24 02:12:13.776504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.776545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.776674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.776700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.776814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.043 [2024-07-24 02:12:13.776839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.043 qpair failed and we were unable to recover it. 00:33:59.043 [2024-07-24 02:12:13.776940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.776965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.777076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.777206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.777336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.777471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.777594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.777747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 
00:33:59.044 [2024-07-24 02:12:13.777879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.777903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.778961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.778986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.779135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.779160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.779265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.779289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 
00:33:59.044 [2024-07-24 02:12:13.779400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.779426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.779554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.779578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.779692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.779717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.779927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.779952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.780049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.780077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.780180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.780205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.780341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.780367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.780474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.780499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.780629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.044 [2024-07-24 02:12:13.780653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.044 qpair failed and we were unable to recover it. 00:33:59.044 [2024-07-24 02:12:13.780762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.780787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 
00:33:59.045 [2024-07-24 02:12:13.780894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.780920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.781905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.781930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.782089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.782207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 
00:33:59.045 [2024-07-24 02:12:13.782333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.782493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.782649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.782773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.782907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.782932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.783054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.783192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.783356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.783514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.783640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 
00:33:59.045 [2024-07-24 02:12:13.783798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.783957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.783984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.784968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.784993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.785093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 
00:33:59.045 [2024-07-24 02:12:13.785219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.785347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.785468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.785592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.785717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.045 [2024-07-24 02:12:13.785850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.045 [2024-07-24 02:12:13.785874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.045 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.786086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.786248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.786379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.786518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 
00:33:59.046 [2024-07-24 02:12:13.786683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.786821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.786957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.786983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.787974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.787999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 
00:33:59.046 [2024-07-24 02:12:13.788132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.788290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.788422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.788551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.788675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.788831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.788960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.788985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.789119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.789143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.789272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.789297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.789425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.789462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 
00:33:59.046 [2024-07-24 02:12:13.789563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.789590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.789743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.789769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.789876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.789903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.790925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.790950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 
00:33:59.046 [2024-07-24 02:12:13.791081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.791107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.791326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.046 [2024-07-24 02:12:13.791354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.046 qpair failed and we were unable to recover it. 00:33:59.046 [2024-07-24 02:12:13.791493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.791518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.791729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.791753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.791844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.791869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.791995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.792117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.792251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.792386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.792516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 
00:33:59.047 [2024-07-24 02:12:13.792636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.792781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.792929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.792970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.793946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.793971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 
00:33:59.047 [2024-07-24 02:12:13.794076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.794100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.794224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.794249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.794464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.794490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.794623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.794648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.794778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.794803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.794908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.794933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.795047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.795187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.795351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.795505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 
00:33:59.047 [2024-07-24 02:12:13.795633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.795769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.795901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.795926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.796029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.796054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.796194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.796219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.796357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.796383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.796476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.796501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.796603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.796628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.047 [2024-07-24 02:12:13.796727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.047 [2024-07-24 02:12:13.796752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.047 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.796857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.796882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 
00:33:59.048 [2024-07-24 02:12:13.797020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.797203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.797386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.797520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.797680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.797812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.797944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.797969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.798100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.798125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.798233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.798257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.798375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.798401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 
00:33:59.048 [2024-07-24 02:12:13.798529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.798554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.798686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.798712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.798863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.798887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.799844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.799871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 
00:33:59.048 [2024-07-24 02:12:13.799989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.800122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.800280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.800421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.800550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.800720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.800870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.048 [2024-07-24 02:12:13.800895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.048 qpair failed and we were unable to recover it. 00:33:59.048 [2024-07-24 02:12:13.801004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.801134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.801285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 
00:33:59.049 [2024-07-24 02:12:13.801454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.801626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.801779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.801904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.801930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.802036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.802165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.802307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.802453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.802626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.802780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 
00:33:59.049 [2024-07-24 02:12:13.802922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.802948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.803074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.803203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.803370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.803557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.803729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.803856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.803989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.804116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.804244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 
00:33:59.049 [2024-07-24 02:12:13.804393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.804531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.804650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.804815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.804949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.804974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.805124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.805272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.805411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.805531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.805686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 
00:33:59.049 [2024-07-24 02:12:13.805810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.805936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.805960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.806092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.806116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.049 [2024-07-24 02:12:13.806213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.049 [2024-07-24 02:12:13.806238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.049 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.806362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.806390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.806519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.806544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.806648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.806679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.806811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.806836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.806944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.806969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.807081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 
00:33:59.050 [2024-07-24 02:12:13.807203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.807342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.807500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.807634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.807793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.807952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.807977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.808089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.808230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.808363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.808523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 
00:33:59.050 [2024-07-24 02:12:13.808662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.808792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.808970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.808996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.809929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.809957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 
00:33:59.050 [2024-07-24 02:12:13.810101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.810235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.810398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.810533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.810699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.810830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.810966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.810992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.811129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.811155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.811260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.811287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.050 [2024-07-24 02:12:13.811392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.811418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 
00:33:59.050 [2024-07-24 02:12:13.811545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.050 [2024-07-24 02:12:13.811570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.050 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.811668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.811693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.811795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.811820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.811947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.811971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.812120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.812159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.812267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.812294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.812454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.812493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.812611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.812637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.812771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.812797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.812896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.812921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 
00:33:59.051 [2024-07-24 02:12:13.813017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.813144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.813290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.813452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.813594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.813788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.813911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.813938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.814051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.814186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.814322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 
00:33:59.051 [2024-07-24 02:12:13.814468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.814659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.814816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.814950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.814975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.815077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.815103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.815202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.815229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.815363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.815390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.815543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.815581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.815721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.815748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.815883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.815909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 
00:33:59.051 [2024-07-24 02:12:13.816008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.816143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.816313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.816505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.816637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.816776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.816937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.816962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.051 qpair failed and we were unable to recover it. 00:33:59.051 [2024-07-24 02:12:13.817092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.051 [2024-07-24 02:12:13.817117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.817217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.817242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.817377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.817403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 
00:33:59.052 [2024-07-24 02:12:13.817515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.817542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.817683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.817708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.817841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.817866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.817989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.818116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.818258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.818473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.818616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.818751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.818881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.818907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 
00:33:59.052 [2024-07-24 02:12:13.819020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.819147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.819309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.819455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.819587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.819749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.819881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.819906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.820044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.820179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.820314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 
00:33:59.052 [2024-07-24 02:12:13.820479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.820614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.820777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.820947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.820974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.821086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.821113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.821223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.821248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.821356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.821382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.821511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.821536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.821631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.821656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.821825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.821850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 
00:33:59.052 [2024-07-24 02:12:13.821977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.822002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.822102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.822127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.822232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.822257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.822379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.052 [2024-07-24 02:12:13.822419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.052 qpair failed and we were unable to recover it. 00:33:59.052 [2024-07-24 02:12:13.822581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.822617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.822757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.822783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.822911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.822936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.823068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.823224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.823382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 
00:33:59.053 [2024-07-24 02:12:13.823510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.823647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.823774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.823900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.823926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.824035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.824172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.824407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.824536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.824662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.824820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 
00:33:59.053 [2024-07-24 02:12:13.824951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.824976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.825926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.825951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.826052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.826076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.826173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.826197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 
00:33:59.053 [2024-07-24 02:12:13.826297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.826331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.826487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.826525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.826698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.826725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.826832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.053 [2024-07-24 02:12:13.826858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.053 qpair failed and we were unable to recover it. 00:33:59.053 [2024-07-24 02:12:13.826993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.827125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.827302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.827457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.827618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.827802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 
00:33:59.054 [2024-07-24 02:12:13.827926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.827951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.828907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.828932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.829033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.829198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 
00:33:59.054 [2024-07-24 02:12:13.829355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.829486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.829610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.829745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.829878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.829905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.830008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.830142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.830302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.830444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.830608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 
00:33:59.054 [2024-07-24 02:12:13.830762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.830917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.830942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.831874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.831899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.832003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.832027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 
00:33:59.054 [2024-07-24 02:12:13.832155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.054 [2024-07-24 02:12:13.832179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.054 qpair failed and we were unable to recover it. 00:33:59.054 [2024-07-24 02:12:13.832289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.832314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.832423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.832448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.832583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.832607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.832715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.832740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.832867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.832892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.832993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.833148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.833285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.833447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 
00:33:59.055 [2024-07-24 02:12:13.833585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.833744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.833873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.833899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.833999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.834130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.834265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.834436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.834571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.834753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.834909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.834934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 
00:33:59.055 [2024-07-24 02:12:13.835047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.835173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.835304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.835470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.835591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.835742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.835900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.835925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.836033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.836213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.836364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 
00:33:59.055 [2024-07-24 02:12:13.836517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.836650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.836777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.836909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.836936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.837068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.837093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.837196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.837221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.837348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.837374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.055 [2024-07-24 02:12:13.837477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.055 [2024-07-24 02:12:13.837501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.055 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.837600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.837625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.837735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.837763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 
00:33:59.056 [2024-07-24 02:12:13.837874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.837899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.838970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.838995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.839108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.839253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 
00:33:59.056 [2024-07-24 02:12:13.839386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.839525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.839684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.839810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.839940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.839967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.840076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.840200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.840360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.840485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.840642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 
00:33:59.056 [2024-07-24 02:12:13.840764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.840902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.840927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.841857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.841882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.842025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.842052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 
00:33:59.056 [2024-07-24 02:12:13.842189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.842214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.842346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.842372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.842504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.842529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.056 [2024-07-24 02:12:13.842642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.056 [2024-07-24 02:12:13.842667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.056 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.842808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.842833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1164000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.842969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.842996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.843109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.843247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.843379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.843510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 
00:33:59.057 [2024-07-24 02:12:13.843668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.843801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.843923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.843952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.844070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.844095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.844190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.844214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.844323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.844348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.844458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.844482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.844692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.844717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.844850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.844875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 
00:33:59.057 [2024-07-24 02:12:13.845130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.845830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.845975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.846164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.846287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.846430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 
00:33:59.057 [2024-07-24 02:12:13.846563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.846686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.846804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.846933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.846958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.847060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.847222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.847358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.847493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.847627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.057 [2024-07-24 02:12:13.847760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 
00:33:59.057 [2024-07-24 02:12:13.847893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.057 [2024-07-24 02:12:13.847918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.057 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.848904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.848929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.849028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.849184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 
00:33:59.058 [2024-07-24 02:12:13.849309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.849478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.849612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.849741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.849903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.849928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 
00:33:59.058 [2024-07-24 02:12:13.850703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.850885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.850987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.851143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.851268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.851433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.851632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.851782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.851913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.851939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.852052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.852077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 
00:33:59.058 [2024-07-24 02:12:13.852169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.852194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.058 [2024-07-24 02:12:13.852297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.058 [2024-07-24 02:12:13.852330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.058 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.852447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.852473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.852602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.852627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.852760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.852786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.852907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.852933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.853038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.853161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.853312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.853473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 
00:33:59.059 [2024-07-24 02:12:13.853607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.853743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.853881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.853905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.854914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.854938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 
00:33:59.059 [2024-07-24 02:12:13.855044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.855069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.855169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.855194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.855327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.855352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.855492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.855517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.855645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.855671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.855826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.855851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.855980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.856114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.856299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.856469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 
00:33:59.059 [2024-07-24 02:12:13.856593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.856748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.856901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.856926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.857049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.059 [2024-07-24 02:12:13.857074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.059 qpair failed and we were unable to recover it. 00:33:59.059 [2024-07-24 02:12:13.857172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.857196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.857309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.857339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.857449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.857475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.857603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.857627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.857754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.857779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.857877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.857901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 
00:33:59.060 [2024-07-24 02:12:13.858013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.858170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.858304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.858476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.858630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.858757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.858878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.858903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.859007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.859165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.859314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 
00:33:59.060 [2024-07-24 02:12:13.859463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.859642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.859777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.859902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.859927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.860033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.860169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.860352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.860481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.860641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.860809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 
00:33:59.060 [2024-07-24 02:12:13.860936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.860961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.861887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.861911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.862024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.060 [2024-07-24 02:12:13.862048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.060 qpair failed and we were unable to recover it. 00:33:59.060 [2024-07-24 02:12:13.862153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 
00:33:59.061 [2024-07-24 02:12:13.862304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.862434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.862560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.862686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.862821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.862974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.862999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.863111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.863253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.863380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.863511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 
00:33:59.061 [2024-07-24 02:12:13.863641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.863765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.863891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.863916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.864056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.864082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.864217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.864241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.864367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.864393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.864495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.864520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.061 [2024-07-24 02:12:13.864621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.061 [2024-07-24 02:12:13.864647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.061 qpair failed and we were unable to recover it. 00:33:59.333 [2024-07-24 02:12:13.864742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.333 [2024-07-24 02:12:13.864767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.333 qpair failed and we were unable to recover it. 00:33:59.333 [2024-07-24 02:12:13.864905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.333 [2024-07-24 02:12:13.864935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.333 qpair failed and we were unable to recover it. 
00:33:59.333 [2024-07-24 02:12:13.865042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.333 [2024-07-24 02:12:13.865067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.333 qpair failed and we were unable to recover it. 00:33:59.333 [2024-07-24 02:12:13.865194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.865219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.865331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.865356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.865453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.865477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.865609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.865634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.865743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.865767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.865869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.865894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.865994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.866131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.866292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 
00:33:59.334 [2024-07-24 02:12:13.866444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.866578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.866705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.866839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.866864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.867025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.867194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.867351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.867519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.867645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.867769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 
00:33:59.334 [2024-07-24 02:12:13.867924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.867948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.868927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.868953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.869064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.869194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 
00:33:59.334 [2024-07-24 02:12:13.869376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.869503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.869656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.869797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.869926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.869953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.870058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.870083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.870239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.334 [2024-07-24 02:12:13.870264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.334 qpair failed and we were unable to recover it. 00:33:59.334 [2024-07-24 02:12:13.870369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.870394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.870502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.870526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.870624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.870653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 
00:33:59.335 [2024-07-24 02:12:13.870755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.870779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.870887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.870911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.871867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.871891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 
00:33:59.335 [2024-07-24 02:12:13.872124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.872938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.872963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.873096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.873227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.873368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 
00:33:59.335 [2024-07-24 02:12:13.873514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.873635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.873797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.873925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.873950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.874051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.874178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.874360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.874489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.874640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.874776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 
00:33:59.335 [2024-07-24 02:12:13.874903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.874927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.875034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.875061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 [2024-07-24 02:12:13.875171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.335 [2024-07-24 02:12:13.875195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.335 qpair failed and we were unable to recover it. 00:33:59.335 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:59.335 [2024-07-24 02:12:13.875292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.875335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:59.336 [2024-07-24 02:12:13.875441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.875466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:59.336 [2024-07-24 02:12:13.875597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.875624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:59.336 [2024-07-24 02:12:13.875730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.336 [2024-07-24 02:12:13.875755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.875885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.875914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 
00:33:59.336 [2024-07-24 02:12:13.876012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.876174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.876313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.876473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.876626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.876780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.876911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.876936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.877045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.877212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.877336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 
00:33:59.336 [2024-07-24 02:12:13.877460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.877593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.877727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.877854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.877891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.878029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.878155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.878314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.878449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.878609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.878727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 
00:33:59.336 [2024-07-24 02:12:13.878858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.878883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.879963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.879988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.336 qpair failed and we were unable to recover it. 00:33:59.336 [2024-07-24 02:12:13.880099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.336 [2024-07-24 02:12:13.880124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 
00:33:59.337 [2024-07-24 02:12:13.880252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.880278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.880392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.880418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.880557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.880581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.880720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.880745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.880861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.880887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.881014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.881169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.881336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.881471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.881636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 
00:33:59.337 [2024-07-24 02:12:13.881799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.881960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.881987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.882085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.882110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.882217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.882242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.882407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.882434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.882567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.882592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.882755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.882781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.882896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.882922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.883032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.883058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.883199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.883225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 
00:33:59.337 [2024-07-24 02:12:13.883364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.883390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.883489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.883514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.883660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.883690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.883832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.883857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 
00:33:59.337 [2024-07-24 02:12:13.884850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.884876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.884984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.885009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.885179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.885205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.885314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.885344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.885441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.885467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.337 qpair failed and we were unable to recover it. 00:33:59.337 [2024-07-24 02:12:13.885598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.337 [2024-07-24 02:12:13.885624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.885742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.885767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.885863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.885888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.885995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.886118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 
00:33:59.338 [2024-07-24 02:12:13.886300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.886449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.886608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.886737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.886900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.886925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.887064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.887187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.887375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.887503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.887656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 
00:33:59.338 [2024-07-24 02:12:13.887783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.887952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.887977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.888893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.888919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.889051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.889076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 
00:33:59.338 [2024-07-24 02:12:13.889243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.889269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.889413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.889440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.889552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.889578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.889691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.338 [2024-07-24 02:12:13.889717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.338 qpair failed and we were unable to recover it. 00:33:59.338 [2024-07-24 02:12:13.889851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.889877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.890018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.890153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.890281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.890458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.890583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 
00:33:59.339 [2024-07-24 02:12:13.890743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.890880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.890905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.891905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.891930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.892059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 
00:33:59.339 [2024-07-24 02:12:13.892189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.892337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.892466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.892600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.892785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.892920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.892945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:59.339 [2024-07-24 02:12:13.893104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.339 [2024-07-24 02:12:13.893228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 
00:33:59.339 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.339 [2024-07-24 02:12:13.893370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.893511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.893633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.893793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.893914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.893940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.894074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.894098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.894189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.894214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.894328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.894354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.894486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.894512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 
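The trap registered a few lines above follows the usual pattern in these nvmf tests: whatever happens, dump the target's shared-memory diagnostics and tear everything down on exit. A minimal sketch of the same idea, assuming process_shm and nvmftestfini behave as their names suggest (both come from the harness's nvmf/common.sh, not from this sketch):
# run best-effort diagnostics, then clean up, on Ctrl-C, kill, or normal exit
cleanup() {
    process_shm --id "$NVMF_APP_SHM_ID" || :   # ignore failures so cleanup still runs
    nvmftestfini                               # harness helper: stop the target app, undo test setup
}
trap cleanup SIGINT SIGTERM EXIT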
00:33:59.339 [2024-07-24 02:12:13.894611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.339 [2024-07-24 02:12:13.894639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.339 qpair failed and we were unable to recover it. 00:33:59.339 [2024-07-24 02:12:13.894777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.894802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.894898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.894923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.895871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.895896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 
00:33:59.340 [2024-07-24 02:12:13.896021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.896881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.896988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.897147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.897301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 
00:33:59.340 [2024-07-24 02:12:13.897468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.897619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.897785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.897909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.897934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.898089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.898113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.898243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.898267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.898400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.898427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.898549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.898575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.898719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.898745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.898873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.898897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 
00:33:59.340 [2024-07-24 02:12:13.899027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.899903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.340 [2024-07-24 02:12:13.899999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.340 [2024-07-24 02:12:13.900023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.340 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.900155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.900181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.900345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.900370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 
00:33:59.341 [2024-07-24 02:12:13.900478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.900504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.900651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.900676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.900798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.900823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.900954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.900978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.901094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.901118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.901232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.901257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.901375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.901400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.901561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.901587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.901691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.901716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.901845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.901869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 
00:33:59.341 [2024-07-24 02:12:13.902000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.902127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.902280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.902429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.902578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.902744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.902899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.902924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 
00:33:59.341 [2024-07-24 02:12:13.903447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.903865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.903995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.904111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.904239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.904398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.904555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.904692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 
00:33:59.341 [2024-07-24 02:12:13.904817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.904847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.904978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.905002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.905106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.905132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.905271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.341 [2024-07-24 02:12:13.905296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.341 qpair failed and we were unable to recover it. 00:33:59.341 [2024-07-24 02:12:13.905435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.905461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.905569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.905594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.905706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.905731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.905829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.905855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.905957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.905982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.906110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 
00:33:59.342 [2024-07-24 02:12:13.906270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.906404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.906540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.906676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.906803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.906971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.906996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.907132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.907157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.907259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.907283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.907478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.907506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.907606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.907630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 
00:33:59.342 [2024-07-24 02:12:13.907756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.907781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.907916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.907941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.908071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.908200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.908389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.908519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.908650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.908868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.908978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.909121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 
00:33:59.342 [2024-07-24 02:12:13.909255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.909426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.909590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.909798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.909954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.909979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.910145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.910170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.910268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.910294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.910403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.910430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.910574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.910601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 00:33:59.342 [2024-07-24 02:12:13.910724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.342 [2024-07-24 02:12:13.910750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.342 qpair failed and we were unable to recover it. 
00:33:59.343 [2024-07-24 02:12:13.910880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.910910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.911938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.911964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.912095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.912255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 
00:33:59.343 [2024-07-24 02:12:13.912401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.912526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.912665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.912796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.912965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.912991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.913103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.913129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.913255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.913281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.913400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.913429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.913589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.913626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.913726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.913751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 
00:33:59.343 [2024-07-24 02:12:13.913911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.913937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.914916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.914944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.915046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.915072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 00:33:59.343 [2024-07-24 02:12:13.915176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.343 [2024-07-24 02:12:13.915202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.343 qpair failed and we were unable to recover it. 
00:33:59.344 [2024-07-24 02:12:13.915353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.915380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.915485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.915510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.915613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.915639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.915745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.915771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.915907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.915932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.916054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.916184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.916346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.916473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.916605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 
00:33:59.344 [2024-07-24 02:12:13.916747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.916906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.916933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 Malloc0 00:33:59.344 [2024-07-24 02:12:13.917041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.917067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.917175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.917201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.917351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.344 [2024-07-24 02:12:13.917378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:59.344 [2024-07-24 02:12:13.917517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.917544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.917668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.344 [2024-07-24 02:12:13.917695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.917804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.344 [2024-07-24 02:12:13.917831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 
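Once the malloc bdev exists (the bare "Malloc0" in the output above is the RPC's reply), the script moves on to rpc_cmd nvmf_create_transport -t tcp -o, which instantiates the TCP transport inside the target before any subsystem or listener is added. Roughly the same call made by hand, with the -o switch left exactly as the script passes it (its long-option mapping depends on the rpc.py version, so treat that part as an assumption):
# create the NVMe-oF TCP transport in the running target
./scripts/rpc.py nvmf_create_transport -t tcp -o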
00:33:59.344 [2024-07-24 02:12:13.917971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.917997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.918134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.918160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.918268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.918294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.918432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.918458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.918584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.918620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.918765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.918790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.918920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.918946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.919049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.919196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.919338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 
00:33:59.344 [2024-07-24 02:12:13.919477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.919618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.919746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.919902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.919928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.920034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.920060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.920168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.920202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.344 qpair failed and we were unable to recover it. 00:33:59.344 [2024-07-24 02:12:13.920334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.344 [2024-07-24 02:12:13.920372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.920484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.920515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.920621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.920647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 
00:33:59.345 [2024-07-24 02:12:13.920716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.345 [2024-07-24 02:12:13.920780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.920805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.920939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.920965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.921949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.921976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 
00:33:59.345 [2024-07-24 02:12:13.922114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.922242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.922387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.922554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.922689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.922845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.922973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.922999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.923107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.923133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.923267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.923294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.923441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.923468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 
00:33:59.345 [2024-07-24 02:12:13.923580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.923606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.923738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.923763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.923864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.923889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.923997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.924133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.924260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.924412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.924543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.924678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.924805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 
00:33:59.345 [2024-07-24 02:12:13.924969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.924997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.925133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.925159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.925266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.925292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.925435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.345 [2024-07-24 02:12:13.925461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.345 qpair failed and we were unable to recover it. 00:33:59.345 [2024-07-24 02:12:13.925561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.925587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.925717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.925744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.925844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.925869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.926008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.926195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.926346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 
00:33:59.346 [2024-07-24 02:12:13.926477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.926622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.926779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.926917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.926942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.927050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.927221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.927356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.927503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.927650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.927778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 
00:33:59.346 [2024-07-24 02:12:13.927937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.927963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.928096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.928220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.928363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.928492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.928646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.928779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.346 [2024-07-24 02:12:13.928922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.928949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.929057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:59.346 [2024-07-24 02:12:13.929084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 
00:33:59.346 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.346 [2024-07-24 02:12:13.929214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.929241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.346 [2024-07-24 02:12:13.929370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.929397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.929497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.929523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.929619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.929645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.929763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.929790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.929899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.929925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.930025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.930051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.930185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.930211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.930377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.930404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 
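Next the test creates the NVMe-oF subsystem on the target; the interleaved error records are unrelated host-side connect retries. A sketch of the same step issued directly against the RPC socket, with the NQN and serial number taken from the log above:
  # create subsystem cnode1; -a allows any host NQN to connect, -s sets the serial number
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001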
00:33:59.346 [2024-07-24 02:12:13.930530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.346 [2024-07-24 02:12:13.930556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.346 qpair failed and we were unable to recover it. 00:33:59.346 [2024-07-24 02:12:13.930673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.930698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.930801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.930827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.930960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.930986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.931083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.931266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.931423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.931552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.931691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.931810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 
00:33:59.347 [2024-07-24 02:12:13.931958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.931984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.932974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.932999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.933156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.933181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.933288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.933320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 
00:33:59.347 [2024-07-24 02:12:13.933430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.933456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.933558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.933584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.933696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.933723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.933831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.933858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.933987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.934116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.934268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.934397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.934529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.934717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 
00:33:59.347 [2024-07-24 02:12:13.934849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.934973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.934999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.935130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.935163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.935269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.935295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.935441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.935467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.935595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.935620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.935732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.347 [2024-07-24 02:12:13.935757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.347 qpair failed and we were unable to recover it. 00:33:59.347 [2024-07-24 02:12:13.935868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.935893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.936001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.936027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.936125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.936152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 
00:33:59.348 [2024-07-24 02:12:13.936252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.936277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.936392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.936418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.936548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.936573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:59.348 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:59.348 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:59.348 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:59.348 [2024-07-24 02:12:13.937429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.937461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.937602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.937629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.937737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.937763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.937866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.937891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.937989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 
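The Malloc0 bdev is then attached to the subsystem as a namespace. Equivalent direct call (a sketch; the namespace ID is auto-assigned when not specified):
  # expose bdev Malloc0 as a namespace of cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0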
00:33:59.348 [2024-07-24 02:12:13.938120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.938245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.938394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.938555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.938714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.938957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.938982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.939108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.939132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.939265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.939290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcc6600 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.939402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.939430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.939529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.939555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 
00:33:59.348 [2024-07-24 02:12:13.939716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.939740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.939841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.939866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.939994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.940020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.940159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.940184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.940291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.940322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.940430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.940454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.940564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.940589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.940726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.348 [2024-07-24 02:12:13.940750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.348 qpair failed and we were unable to recover it. 00:33:59.348 [2024-07-24 02:12:13.940885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.940911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.941075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.941100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 
00:33:59.349 [2024-07-24 02:12:13.941229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.941253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.941359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.941384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.941488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.941514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.941652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.941677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.941815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.941840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.941977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.942113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.942274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.942423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.942555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 
00:33:59.349 [2024-07-24 02:12:13.942690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.942815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.942964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.942989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.943954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.943979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 
00:33:59.349 [2024-07-24 02:12:13.944092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.944117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.944226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.944251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.944351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.944377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.944481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.944506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.944614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.944639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.349 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:59.349 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.349 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:59.349 [2024-07-24 02:12:13.945426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.945456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.945594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.945621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.945738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:59.349 [2024-07-24 02:12:13.945764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420
00:33:59.349 qpair failed and we were unable to recover it.
00:33:59.349 [2024-07-24 02:12:13.945893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.945919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.946055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.946081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.946214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.946240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f115c000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.946376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.946415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.349 qpair failed and we were unable to recover it. 00:33:59.349 [2024-07-24 02:12:13.946530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.349 [2024-07-24 02:12:13.946557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.946692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.946718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.946814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.946839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.946952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.946979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.947138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.947164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.947299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.947330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 
00:33:59.350 [2024-07-24 02:12:13.947450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.947476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.947580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.947606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.947736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.947761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.947870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.947897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.948030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.948056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.948185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.948211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.948314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.948353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.948462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.948488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.948594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.948620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.948756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:59.350 [2024-07-24 02:12:13.948781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1154000b90 with addr=10.0.0.2, port=4420 00:33:59.350 qpair failed and we were unable to recover it. 
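The repeated "connect() failed, errno = 111" / "sock connection error" pairs above are the SPDK host-side initiator retrying its TCP connection to 10.0.0.2:4420: errno 111 is ECONNREFUSED on Linux, which is what you get while the target has no listener on that address and port yet. The rpc_cmd nvmf_subsystem_add_listener call traced at host/target_disconnect.sh@25 above (rpc_cmd is the autotest helper that forwards to scripts/rpc.py) is the step that creates that listener, and the refusals stop at the "NVMe/TCP Target Listening" notice below. A minimal sketch of the equivalent standalone RPC sequence follows; the subcommand names are SPDK rpc.py subcommands, but the script path, the allow-any-host flag and the bare-bones ordering are assumptions for illustration, not taken from this run's scripts:

    # sketch: bring up an NVMe/TCP listener on a running nvmf_tgt (default rpc.py socket assumed)
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a    # -a: allow any host (assumed here)
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Until that last call completes, any TCP connect to 10.0.0.2:4420 is refused, which is exactly the errno 111 storm recorded above.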
00:33:59.350 [2024-07-24 02:12:13.948950] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:59.350 [2024-07-24 02:12:13.951439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.350 [2024-07-24 02:12:13.951567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.350 [2024-07-24 02:12:13.951595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.350 [2024-07-24 02:12:13.951610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.350 [2024-07-24 02:12:13.951631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90
00:33:59.350 [2024-07-24 02:12:13.951666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:59.350 qpair failed and we were unable to recover it.
00:33:59.350 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.350 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:33:59.350 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:59.350 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:59.350 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:59.350 02:12:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1583721
00:33:59.350 [2024-07-24 02:12:13.961304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:59.350 [2024-07-24 02:12:13.961446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:59.350 [2024-07-24 02:12:13.961475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:59.350 [2024-07-24 02:12:13.961490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:59.350 [2024-07-24 02:12:13.961502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90
00:33:59.350 [2024-07-24 02:12:13.961533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:33:59.350 qpair failed and we were unable to recover it.
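Once the listener is up the failure mode changes: the TCP connection now succeeds, but the target rejects the Fabrics CONNECT for the I/O queue pair because the controller ID carried in the CONNECT data (0x1) does not match any live controller ("Unknown controller ID 0x1" from _nvmf_ctrlr_add_io_qpair). The host then sees "Connect command completed with error: sct 1, sc 130" (a command-specific status of 0x82, i.e. the Fabrics Connect Invalid Parameters code) and gives up on the queue pair with "CQ transport error -6 (No such device or address)", ENXIO. This is consistent with the target_disconnect testcase tearing the target down and re-creating it underneath a host that keeps retrying, which is why the same block repeats below. A minimal sketch for inspecting the target side while the host loops, assuming the same default rpc.py socket; these RPC names exist in SPDK's rpc.py, but their output is not shown in this log:

    # sketch: ask the target what it currently knows about cnode1
    ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1   # cntlid 0x1 absent here is what produces "Unknown controller ID 0x1"
    ./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1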
00:33:59.350 [2024-07-24 02:12:13.971307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.350 [2024-07-24 02:12:13.971426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.350 [2024-07-24 02:12:13.971459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.350 [2024-07-24 02:12:13.971475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.350 [2024-07-24 02:12:13.971488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.350 [2024-07-24 02:12:13.971519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.981332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.350 [2024-07-24 02:12:13.981449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.350 [2024-07-24 02:12:13.981475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.350 [2024-07-24 02:12:13.981490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.350 [2024-07-24 02:12:13.981502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.350 [2024-07-24 02:12:13.981533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:13.991306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.350 [2024-07-24 02:12:13.991438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.350 [2024-07-24 02:12:13.991464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.350 [2024-07-24 02:12:13.991479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.350 [2024-07-24 02:12:13.991491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.350 [2024-07-24 02:12:13.991523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.350 qpair failed and we were unable to recover it. 
00:33:59.350 [2024-07-24 02:12:14.001303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.350 [2024-07-24 02:12:14.001423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.350 [2024-07-24 02:12:14.001449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.350 [2024-07-24 02:12:14.001463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.350 [2024-07-24 02:12:14.001476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.350 [2024-07-24 02:12:14.001506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.350 qpair failed and we were unable to recover it. 00:33:59.350 [2024-07-24 02:12:14.011342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.350 [2024-07-24 02:12:14.011465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.350 [2024-07-24 02:12:14.011491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.011505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.011523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.011557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.021374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.021513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.021539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.021553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.021566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.021596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 
00:33:59.351 [2024-07-24 02:12:14.031376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.031493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.031519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.031534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.031546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.031578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.041403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.041529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.041555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.041569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.041582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.041612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.051413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.051520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.051546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.051560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.051573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.051602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 
00:33:59.351 [2024-07-24 02:12:14.061480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.061619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.061646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.061660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.061673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.061702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.071531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.071640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.071666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.071680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.071693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.071723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.081577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.081695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.081723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.081738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.081754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.081787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 
00:33:59.351 [2024-07-24 02:12:14.091594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.091756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.091783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.091798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.091812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.091856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.101589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.101696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.101722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.101742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.101756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.101785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.111707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.111827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.111853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.111868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.111880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.111910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 
00:33:59.351 [2024-07-24 02:12:14.121667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.121777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.121803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.121818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.121830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.121860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.131667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.131766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.351 [2024-07-24 02:12:14.131792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.351 [2024-07-24 02:12:14.131806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.351 [2024-07-24 02:12:14.131819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.351 [2024-07-24 02:12:14.131863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.351 qpair failed and we were unable to recover it. 00:33:59.351 [2024-07-24 02:12:14.141670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.351 [2024-07-24 02:12:14.141782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.141808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.141822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.141835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.141866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 
00:33:59.352 [2024-07-24 02:12:14.151747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.151860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.151885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.151899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.151912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.151943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 00:33:59.352 [2024-07-24 02:12:14.161767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.161876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.161902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.161916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.161929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.161959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 00:33:59.352 [2024-07-24 02:12:14.171821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.171929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.171955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.171970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.171983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.172013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 
00:33:59.352 [2024-07-24 02:12:14.181825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.181949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.181975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.181989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.182002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.182032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 00:33:59.352 [2024-07-24 02:12:14.191979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.192083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.192110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.192130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.192144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.192174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 00:33:59.352 [2024-07-24 02:12:14.201866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.201977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.202003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.202018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.202031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.202060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 
00:33:59.352 [2024-07-24 02:12:14.211926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.352 [2024-07-24 02:12:14.212046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.352 [2024-07-24 02:12:14.212075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.352 [2024-07-24 02:12:14.212089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.352 [2024-07-24 02:12:14.212102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.352 [2024-07-24 02:12:14.212133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.352 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.221946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.222072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.222100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.222115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.222127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.612 [2024-07-24 02:12:14.222158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.231978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.232090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.232117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.232131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.232144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.612 [2024-07-24 02:12:14.232174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.612 qpair failed and we were unable to recover it. 
00:33:59.612 [2024-07-24 02:12:14.242020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.242126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.242152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.242166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.242179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.612 [2024-07-24 02:12:14.242209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.252065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.252197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.252226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.252241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.252253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.612 [2024-07-24 02:12:14.252284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.262039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.262159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.262186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.262200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.262212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1154000b90 00:33:59.612 [2024-07-24 02:12:14.262243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:59.612 qpair failed and we were unable to recover it. 
00:33:59.612 [2024-07-24 02:12:14.272074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.272181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.272212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.272228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.272242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.272273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.282113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.282220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.282253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.282268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.282282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.282314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.292162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.292264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.292292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.292306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.292326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.292357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 
00:33:59.612 [2024-07-24 02:12:14.302189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.302306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.302342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.302361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.302376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.302408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.312194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.312301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.312339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.312355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.312367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.312397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.322226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.322357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.322384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.322398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.322411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.322446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 
00:33:59.612 [2024-07-24 02:12:14.332231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.612 [2024-07-24 02:12:14.332360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.612 [2024-07-24 02:12:14.332385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.612 [2024-07-24 02:12:14.332400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.612 [2024-07-24 02:12:14.332413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.612 [2024-07-24 02:12:14.332442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.612 qpair failed and we were unable to recover it. 00:33:59.612 [2024-07-24 02:12:14.342298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.342441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.342466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.342481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.342494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.342523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.352298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.352416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.352442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.352456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.352470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.352500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 
00:33:59.613 [2024-07-24 02:12:14.362337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.362449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.362474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.362488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.362500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.362531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.372366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.372465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.372495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.372510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.372523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.372553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.382404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.382534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.382561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.382575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.382592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.382625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 
00:33:59.613 [2024-07-24 02:12:14.392449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.392555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.392581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.392595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.392608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.392637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.402445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.402553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.402579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.402593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.402606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.402635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.412478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.412583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.412609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.412623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.412642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.412672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 
00:33:59.613 [2024-07-24 02:12:14.422519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.422626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.422652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.422665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.422678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.422708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.432524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.432629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.432655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.432669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.432682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.432712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.442582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.442687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.442713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.442726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.442739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.442769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 
00:33:59.613 [2024-07-24 02:12:14.452573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.452676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.452702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.452716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.452729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.452757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.613 [2024-07-24 02:12:14.462669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.613 [2024-07-24 02:12:14.462819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.613 [2024-07-24 02:12:14.462844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.613 [2024-07-24 02:12:14.462858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.613 [2024-07-24 02:12:14.462871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.613 [2024-07-24 02:12:14.462900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.613 qpair failed and we were unable to recover it. 00:33:59.614 [2024-07-24 02:12:14.472637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.614 [2024-07-24 02:12:14.472759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.614 [2024-07-24 02:12:14.472786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.614 [2024-07-24 02:12:14.472800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.614 [2024-07-24 02:12:14.472816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.614 [2024-07-24 02:12:14.472849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.614 qpair failed and we were unable to recover it. 
00:33:59.614 [2024-07-24 02:12:14.482657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.614 [2024-07-24 02:12:14.482765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.614 [2024-07-24 02:12:14.482791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.614 [2024-07-24 02:12:14.482805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.614 [2024-07-24 02:12:14.482818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.614 [2024-07-24 02:12:14.482847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.614 qpair failed and we were unable to recover it. 00:33:59.614 [2024-07-24 02:12:14.492679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.614 [2024-07-24 02:12:14.492781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.614 [2024-07-24 02:12:14.492808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.614 [2024-07-24 02:12:14.492822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.614 [2024-07-24 02:12:14.492835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.614 [2024-07-24 02:12:14.492876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.614 qpair failed and we were unable to recover it. 00:33:59.614 [2024-07-24 02:12:14.502757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.614 [2024-07-24 02:12:14.502913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.614 [2024-07-24 02:12:14.502939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.614 [2024-07-24 02:12:14.502959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.614 [2024-07-24 02:12:14.502973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.614 [2024-07-24 02:12:14.503002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.614 qpair failed and we were unable to recover it. 
00:33:59.873 [2024-07-24 02:12:14.512753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.512872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.512898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.512912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.873 [2024-07-24 02:12:14.512925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.873 [2024-07-24 02:12:14.512954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.873 qpair failed and we were unable to recover it. 00:33:59.873 [2024-07-24 02:12:14.522807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.522910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.522937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.522951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.873 [2024-07-24 02:12:14.522963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.873 [2024-07-24 02:12:14.522992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.873 qpair failed and we were unable to recover it. 00:33:59.873 [2024-07-24 02:12:14.532864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.532970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.532997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.533011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.873 [2024-07-24 02:12:14.533024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.873 [2024-07-24 02:12:14.533053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.873 qpair failed and we were unable to recover it. 
00:33:59.873 [2024-07-24 02:12:14.542833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.542953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.542978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.542992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.873 [2024-07-24 02:12:14.543005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.873 [2024-07-24 02:12:14.543033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.873 qpair failed and we were unable to recover it. 00:33:59.873 [2024-07-24 02:12:14.552911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.553016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.553042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.553056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.873 [2024-07-24 02:12:14.553068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.873 [2024-07-24 02:12:14.553096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.873 qpair failed and we were unable to recover it. 00:33:59.873 [2024-07-24 02:12:14.562908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.563011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.563037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.563051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.873 [2024-07-24 02:12:14.563064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.873 [2024-07-24 02:12:14.563093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.873 qpair failed and we were unable to recover it. 
00:33:59.873 [2024-07-24 02:12:14.572906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.873 [2024-07-24 02:12:14.573011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.873 [2024-07-24 02:12:14.573037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.873 [2024-07-24 02:12:14.573051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.573063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.573093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.582973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.583092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.583120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.583135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.583147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.583177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.592972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.593078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.593103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.593122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.593134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.593163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 
00:33:59.874 [2024-07-24 02:12:14.603085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.603211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.603237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.603251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.603264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.603293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.613055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.613163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.613189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.613203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.613218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.613248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.623049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.623155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.623181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.623194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.623207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.623236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 
00:33:59.874 [2024-07-24 02:12:14.633088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.633192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.633218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.633232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.633244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.633273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.643113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.643218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.643243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.643257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.643269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.643298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.653140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.653247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.653272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.653287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.653300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.653336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 
00:33:59.874 [2024-07-24 02:12:14.663165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.663288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.663314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.663341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.663354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.663384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.673192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.673308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.673340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.673358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.673372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.673401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.683271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.683433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.683464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.683480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.683492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.683522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 
00:33:59.874 [2024-07-24 02:12:14.693247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.693349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.693376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.874 [2024-07-24 02:12:14.693390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.874 [2024-07-24 02:12:14.693403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.874 [2024-07-24 02:12:14.693446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.874 qpair failed and we were unable to recover it. 00:33:59.874 [2024-07-24 02:12:14.703282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.874 [2024-07-24 02:12:14.703394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.874 [2024-07-24 02:12:14.703420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.703433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.703446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.703476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 00:33:59.875 [2024-07-24 02:12:14.713328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.875 [2024-07-24 02:12:14.713447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.875 [2024-07-24 02:12:14.713475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.713490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.713502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.713533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 
00:33:59.875 [2024-07-24 02:12:14.723342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.875 [2024-07-24 02:12:14.723455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.875 [2024-07-24 02:12:14.723482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.723497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.723512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.723548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 00:33:59.875 [2024-07-24 02:12:14.733372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.875 [2024-07-24 02:12:14.733504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.875 [2024-07-24 02:12:14.733531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.733546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.733562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.733594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 00:33:59.875 [2024-07-24 02:12:14.743399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.875 [2024-07-24 02:12:14.743506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.875 [2024-07-24 02:12:14.743532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.743547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.743559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.743590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 
00:33:59.875 [2024-07-24 02:12:14.753437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.875 [2024-07-24 02:12:14.753555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.875 [2024-07-24 02:12:14.753582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.753596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.753609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.753639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 00:33:59.875 [2024-07-24 02:12:14.763494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:59.875 [2024-07-24 02:12:14.763647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:59.875 [2024-07-24 02:12:14.763673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:59.875 [2024-07-24 02:12:14.763687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:59.875 [2024-07-24 02:12:14.763703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:33:59.875 [2024-07-24 02:12:14.763732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:59.875 qpair failed and we were unable to recover it. 00:34:00.134 [2024-07-24 02:12:14.773648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.773768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.773799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.773814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.773827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.773856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 
00:34:00.134 [2024-07-24 02:12:14.783556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.783665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.783691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.783706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.783719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.783748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 00:34:00.134 [2024-07-24 02:12:14.793625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.793739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.793765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.793779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.793792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.793822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 00:34:00.134 [2024-07-24 02:12:14.803602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.803709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.803735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.803749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.803761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.803791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 
00:34:00.134 [2024-07-24 02:12:14.813622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.813730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.813756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.813770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.813788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.813819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 00:34:00.134 [2024-07-24 02:12:14.823665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.823776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.823801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.823815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.823828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.823857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 00:34:00.134 [2024-07-24 02:12:14.833669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.833788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.833814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.833828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.833844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.833874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 
00:34:00.134 [2024-07-24 02:12:14.843670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.134 [2024-07-24 02:12:14.843825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.134 [2024-07-24 02:12:14.843852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.134 [2024-07-24 02:12:14.843866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.134 [2024-07-24 02:12:14.843879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.134 [2024-07-24 02:12:14.843907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.134 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.853730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.853829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.853855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.853869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.853881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.853910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.863778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.863898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.863924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.863937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.863953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.863983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 
00:34:00.135 [2024-07-24 02:12:14.873802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.873910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.873935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.873950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.873962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.873991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.883805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.883908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.883936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.883950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.883963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.883992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.893850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.893954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.893979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.893993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.894006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.894035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 
00:34:00.135 [2024-07-24 02:12:14.903846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.903953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.903979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.903993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.904011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.904040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.913892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.914024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.914050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.914064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.914076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.914106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.924004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.924104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.924129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.924143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.924156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.924185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 
00:34:00.135 [2024-07-24 02:12:14.933991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.934100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.934126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.934139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.934152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.934181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.943960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.944069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.944095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.944108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.944121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.944150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.954037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.954148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.954174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.954188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.954200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.954230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 
00:34:00.135 [2024-07-24 02:12:14.964033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.964137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.964163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.135 [2024-07-24 02:12:14.964177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.135 [2024-07-24 02:12:14.964189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.135 [2024-07-24 02:12:14.964218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.135 qpair failed and we were unable to recover it. 00:34:00.135 [2024-07-24 02:12:14.974105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.135 [2024-07-24 02:12:14.974205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.135 [2024-07-24 02:12:14.974230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.136 [2024-07-24 02:12:14.974244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.136 [2024-07-24 02:12:14.974256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.136 [2024-07-24 02:12:14.974285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.136 qpair failed and we were unable to recover it. 00:34:00.136 [2024-07-24 02:12:14.984083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.136 [2024-07-24 02:12:14.984192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.136 [2024-07-24 02:12:14.984218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.136 [2024-07-24 02:12:14.984232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.136 [2024-07-24 02:12:14.984245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.136 [2024-07-24 02:12:14.984273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.136 qpair failed and we were unable to recover it. 
00:34:00.136 [2024-07-24 02:12:14.994104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.136 [2024-07-24 02:12:14.994221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.136 [2024-07-24 02:12:14.994247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.136 [2024-07-24 02:12:14.994268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.136 [2024-07-24 02:12:14.994282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.136 [2024-07-24 02:12:14.994311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.136 qpair failed and we were unable to recover it. 00:34:00.136 [2024-07-24 02:12:15.004135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.136 [2024-07-24 02:12:15.004238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.136 [2024-07-24 02:12:15.004264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.136 [2024-07-24 02:12:15.004278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.136 [2024-07-24 02:12:15.004291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.136 [2024-07-24 02:12:15.004329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.136 qpair failed and we were unable to recover it. 00:34:00.136 [2024-07-24 02:12:15.014227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.136 [2024-07-24 02:12:15.014373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.136 [2024-07-24 02:12:15.014399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.136 [2024-07-24 02:12:15.014413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.136 [2024-07-24 02:12:15.014426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.136 [2024-07-24 02:12:15.014455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.136 qpair failed and we were unable to recover it. 
00:34:00.136 [2024-07-24 02:12:15.024228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.136 [2024-07-24 02:12:15.024341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.136 [2024-07-24 02:12:15.024367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.136 [2024-07-24 02:12:15.024381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.136 [2024-07-24 02:12:15.024394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.136 [2024-07-24 02:12:15.024423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.136 qpair failed and we were unable to recover it. 00:34:00.394 [2024-07-24 02:12:15.034242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.394 [2024-07-24 02:12:15.034363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.394 [2024-07-24 02:12:15.034390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.394 [2024-07-24 02:12:15.034404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.394 [2024-07-24 02:12:15.034416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.394 [2024-07-24 02:12:15.034447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.394 qpair failed and we were unable to recover it. 00:34:00.394 [2024-07-24 02:12:15.044283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.394 [2024-07-24 02:12:15.044398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.394 [2024-07-24 02:12:15.044425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.394 [2024-07-24 02:12:15.044439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.394 [2024-07-24 02:12:15.044451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.394 [2024-07-24 02:12:15.044480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 
00:34:00.395 [2024-07-24 02:12:15.054291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.054401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.054427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.054441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.054453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.054482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.064347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.064453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.064479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.064492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.064505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.064547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.074311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.074420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.074445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.074459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.074472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.074502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 
00:34:00.395 [2024-07-24 02:12:15.084403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.084509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.084539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.084554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.084567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.084596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.094404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.094509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.094535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.094548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.094561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.094590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.104448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.104550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.104576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.104590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.104603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.104632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 
00:34:00.395 [2024-07-24 02:12:15.114497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.114621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.114647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.114660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.114673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.114703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.124476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.124593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.124618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.124632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.124644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.124679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.134548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.134656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.134682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.134696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.134709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.134739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 
00:34:00.395 [2024-07-24 02:12:15.144536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.144643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.144669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.144683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.144695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.144725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.154625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.154732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.154757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.154771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.154784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.154814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 00:34:00.395 [2024-07-24 02:12:15.164616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.164757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.164782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.164796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.395 [2024-07-24 02:12:15.164808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.395 [2024-07-24 02:12:15.164837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.395 qpair failed and we were unable to recover it. 
00:34:00.395 [2024-07-24 02:12:15.174664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.395 [2024-07-24 02:12:15.174778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.395 [2024-07-24 02:12:15.174811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.395 [2024-07-24 02:12:15.174826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.174839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.174868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.184777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.184889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.184914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.184928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.184941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.184970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.194696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.194797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.194823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.194837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.194849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.194878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 
00:34:00.396 [2024-07-24 02:12:15.204780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.204883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.204908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.204922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.204935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.204964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.214763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.214864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.214889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.214903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.214915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.214950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.224816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.224923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.224948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.224962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.224974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.225003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 
00:34:00.396 [2024-07-24 02:12:15.234804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.234933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.234958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.234972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.234985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.235014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.244854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.244964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.244990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.245004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.245017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.245046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.254854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.254960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.254985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.254999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.255012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.255042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 
00:34:00.396 [2024-07-24 02:12:15.264930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.265044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.265071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.265085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.265097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.265127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.274919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.275020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.275045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.275059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.275071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.275101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 00:34:00.396 [2024-07-24 02:12:15.284945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.396 [2024-07-24 02:12:15.285078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.396 [2024-07-24 02:12:15.285106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.396 [2024-07-24 02:12:15.285124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.396 [2024-07-24 02:12:15.285137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.396 [2024-07-24 02:12:15.285167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.396 qpair failed and we were unable to recover it. 
00:34:00.655 [2024-07-24 02:12:15.295001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.295109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.295136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.295150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.295162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.295193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 00:34:00.655 [2024-07-24 02:12:15.305059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.305171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.305197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.305211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.305229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.305259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 00:34:00.655 [2024-07-24 02:12:15.315048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.315164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.315190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.315205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.315218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.315259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 
00:34:00.655 [2024-07-24 02:12:15.325079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.325180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.325205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.325219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.325231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.325262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 00:34:00.655 [2024-07-24 02:12:15.335086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.335186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.335211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.335225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.335238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.335268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 00:34:00.655 [2024-07-24 02:12:15.345132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.345242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.345267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.345281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.345294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.345330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 
00:34:00.655 [2024-07-24 02:12:15.355162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.355280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.355307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.355328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.355341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.355372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 00:34:00.655 [2024-07-24 02:12:15.365195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.365314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.365347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.365361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.365373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.365403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 00:34:00.655 [2024-07-24 02:12:15.375203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.375304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.375337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.375352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.375365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.655 [2024-07-24 02:12:15.375394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.655 qpair failed and we were unable to recover it. 
00:34:00.655 [2024-07-24 02:12:15.385218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.655 [2024-07-24 02:12:15.385333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.655 [2024-07-24 02:12:15.385358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.655 [2024-07-24 02:12:15.385372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.655 [2024-07-24 02:12:15.385385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.385414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.395264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.395391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.395417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.395440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.395453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.395483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.405293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.405441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.405466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.405480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.405493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.405523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 
00:34:00.656 [2024-07-24 02:12:15.415343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.415439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.415465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.415479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.415492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.415533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.425394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.425556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.425582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.425595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.425608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.425638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.435429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.435580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.435606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.435620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.435632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.435663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 
00:34:00.656 [2024-07-24 02:12:15.445405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.445516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.445541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.445556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.445568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.445597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.455432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.455548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.455574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.455588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.455600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.455630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.465510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.465626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.465651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.465665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.465678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.465706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 
00:34:00.656 [2024-07-24 02:12:15.475476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.475613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.475639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.475652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.475665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.475695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.485513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.485613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.485642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.485657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.485670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.485698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.495616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.495733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.495760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.495774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.495790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.495821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 
00:34:00.656 [2024-07-24 02:12:15.505595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.505713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.505740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.656 [2024-07-24 02:12:15.505754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.656 [2024-07-24 02:12:15.505766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.656 [2024-07-24 02:12:15.505798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.656 qpair failed and we were unable to recover it. 00:34:00.656 [2024-07-24 02:12:15.515688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.656 [2024-07-24 02:12:15.515795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.656 [2024-07-24 02:12:15.515820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.657 [2024-07-24 02:12:15.515834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.657 [2024-07-24 02:12:15.515847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.657 [2024-07-24 02:12:15.515876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.657 qpair failed and we were unable to recover it. 00:34:00.657 [2024-07-24 02:12:15.525677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.657 [2024-07-24 02:12:15.525820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.657 [2024-07-24 02:12:15.525846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.657 [2024-07-24 02:12:15.525859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.657 [2024-07-24 02:12:15.525872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.657 [2024-07-24 02:12:15.525907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.657 qpair failed and we were unable to recover it. 
00:34:00.657 [2024-07-24 02:12:15.535667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.657 [2024-07-24 02:12:15.535764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.657 [2024-07-24 02:12:15.535789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.657 [2024-07-24 02:12:15.535803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.657 [2024-07-24 02:12:15.535816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.657 [2024-07-24 02:12:15.535845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.657 qpair failed and we were unable to recover it. 00:34:00.657 [2024-07-24 02:12:15.545739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.657 [2024-07-24 02:12:15.545887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.657 [2024-07-24 02:12:15.545912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.657 [2024-07-24 02:12:15.545926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.657 [2024-07-24 02:12:15.545939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.657 [2024-07-24 02:12:15.545968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.657 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.555713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.555823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.555849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.555863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.555874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.555903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 
00:34:00.916 [2024-07-24 02:12:15.565776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.565881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.565909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.565923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.565936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.565964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.575783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.575915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.575946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.575961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.575974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.576003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.585833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.585941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.585966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.585980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.585993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.586022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 
00:34:00.916 [2024-07-24 02:12:15.595880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.596036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.596060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.596073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.596085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.596114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.605883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.606021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.606047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.606061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.606074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.606102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.615902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.616004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.616029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.616043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.616056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.616091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 
00:34:00.916 [2024-07-24 02:12:15.625921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.626033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.626059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.626073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.626086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.626127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.635959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.636081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.636106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.636120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.636132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.636160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.645980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.646084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.646109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.646123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.646136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.646165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 
00:34:00.916 [2024-07-24 02:12:15.655993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.916 [2024-07-24 02:12:15.656098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.916 [2024-07-24 02:12:15.656124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.916 [2024-07-24 02:12:15.656137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.916 [2024-07-24 02:12:15.656150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.916 [2024-07-24 02:12:15.656180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.916 qpair failed and we were unable to recover it. 00:34:00.916 [2024-07-24 02:12:15.666037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.666145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.666177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.666193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.666205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.666235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.676080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.676208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.676233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.676247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.676260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.676291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 
00:34:00.917 [2024-07-24 02:12:15.686128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.686237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.686262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.686276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.686288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.686327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.696105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.696206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.696232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.696246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.696258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.696301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.706184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.706296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.706328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.706344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.706362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.706391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 
00:34:00.917 [2024-07-24 02:12:15.716178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.716287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.716312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.716338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.716352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.716382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.726235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.726375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.726401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.726415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.726427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.726457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.736208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.736324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.736350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.736363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.736376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.736411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 
00:34:00.917 [2024-07-24 02:12:15.746282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.746398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.746424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.746437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.746450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.746481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.756264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.756377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.756403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.756417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.756429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.756461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.766296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.766400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.766426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.766439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.766453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.766483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 
00:34:00.917 [2024-07-24 02:12:15.776336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.776491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.776519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.776533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.776546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.917 [2024-07-24 02:12:15.776577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.917 qpair failed and we were unable to recover it. 00:34:00.917 [2024-07-24 02:12:15.786367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.917 [2024-07-24 02:12:15.786478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.917 [2024-07-24 02:12:15.786504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.917 [2024-07-24 02:12:15.786518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.917 [2024-07-24 02:12:15.786530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.918 [2024-07-24 02:12:15.786559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.918 qpair failed and we were unable to recover it. 00:34:00.918 [2024-07-24 02:12:15.796410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.918 [2024-07-24 02:12:15.796516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.918 [2024-07-24 02:12:15.796542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.918 [2024-07-24 02:12:15.796563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.918 [2024-07-24 02:12:15.796576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.918 [2024-07-24 02:12:15.796606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.918 qpair failed and we were unable to recover it. 
00:34:00.918 [2024-07-24 02:12:15.806470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:00.918 [2024-07-24 02:12:15.806584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:00.918 [2024-07-24 02:12:15.806609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:00.918 [2024-07-24 02:12:15.806623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:00.918 [2024-07-24 02:12:15.806636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:00.918 [2024-07-24 02:12:15.806665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:00.918 qpair failed and we were unable to recover it. 00:34:01.176 [2024-07-24 02:12:15.816465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.176 [2024-07-24 02:12:15.816577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.176 [2024-07-24 02:12:15.816602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.816617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.816630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.816659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.826496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.826624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.826649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.826664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.826677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.826708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 
00:34:01.177 [2024-07-24 02:12:15.836563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.836683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.836711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.836725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.836738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.836769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.846545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.846645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.846670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.846684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.846697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.846726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.856559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.856671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.856696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.856710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.856723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.856752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 
00:34:01.177 [2024-07-24 02:12:15.866595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.866704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.866730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.866745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.866757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.866787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.876621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.876732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.876757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.876771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.876784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.876813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.886661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.886768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.886795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.886814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.886828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.886857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 
00:34:01.177 [2024-07-24 02:12:15.896672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.896805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.896831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.896846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.896858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.896889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.906768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.906883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.906909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.906923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.906936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.906965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.916780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.916885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.916910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.916924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.916937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.916967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 
00:34:01.177 [2024-07-24 02:12:15.926796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.926908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.926934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.926948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.926961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.926991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.936803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.177 [2024-07-24 02:12:15.936921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.177 [2024-07-24 02:12:15.936947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.177 [2024-07-24 02:12:15.936961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.177 [2024-07-24 02:12:15.936973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.177 [2024-07-24 02:12:15.937003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.177 qpair failed and we were unable to recover it. 00:34:01.177 [2024-07-24 02:12:15.946841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:15.946949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:15.946974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:15.946988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:15.947002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:15.947031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 
00:34:01.178 [2024-07-24 02:12:15.956872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:15.956976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:15.957001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:15.957016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:15.957028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:15.957057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:15.966910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:15.967032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:15.967058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:15.967072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:15.967085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:15.967115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:15.976946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:15.977049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:15.977080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:15.977095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:15.977108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:15.977136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 
00:34:01.178 [2024-07-24 02:12:15.986973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:15.987122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:15.987149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:15.987164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:15.987180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:15.987222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:15.996996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:15.997101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:15.997128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:15.997142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:15.997155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:15.997184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:16.006983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.007090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.007116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.007129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.007143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.007171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 
00:34:01.178 [2024-07-24 02:12:16.017026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.017133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.017160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.017173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.017186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.017221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:16.027080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.027189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.027214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.027228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.027242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.027270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:16.037070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.037173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.037199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.037212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.037225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.037254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 
00:34:01.178 [2024-07-24 02:12:16.047091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.047195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.047221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.047235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.047246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.047275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:16.057165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.057282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.057307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.057328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.057342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.057372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 00:34:01.178 [2024-07-24 02:12:16.067214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.178 [2024-07-24 02:12:16.067338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.178 [2024-07-24 02:12:16.067369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.178 [2024-07-24 02:12:16.067383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.178 [2024-07-24 02:12:16.067396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.178 [2024-07-24 02:12:16.067426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.178 qpair failed and we were unable to recover it. 
00:34:01.438 [2024-07-24 02:12:16.077192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.077333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.438 [2024-07-24 02:12:16.077360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.438 [2024-07-24 02:12:16.077374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.438 [2024-07-24 02:12:16.077387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.438 [2024-07-24 02:12:16.077416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.438 qpair failed and we were unable to recover it. 00:34:01.438 [2024-07-24 02:12:16.087252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.087365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.438 [2024-07-24 02:12:16.087391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.438 [2024-07-24 02:12:16.087405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.438 [2024-07-24 02:12:16.087418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.438 [2024-07-24 02:12:16.087447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.438 qpair failed and we were unable to recover it. 00:34:01.438 [2024-07-24 02:12:16.097242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.097380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.438 [2024-07-24 02:12:16.097406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.438 [2024-07-24 02:12:16.097420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.438 [2024-07-24 02:12:16.097433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.438 [2024-07-24 02:12:16.097463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.438 qpair failed and we were unable to recover it. 
00:34:01.438 [2024-07-24 02:12:16.107304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.107422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.438 [2024-07-24 02:12:16.107447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.438 [2024-07-24 02:12:16.107463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.438 [2024-07-24 02:12:16.107484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.438 [2024-07-24 02:12:16.107514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.438 qpair failed and we were unable to recover it. 00:34:01.438 [2024-07-24 02:12:16.117306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.117426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.438 [2024-07-24 02:12:16.117451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.438 [2024-07-24 02:12:16.117465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.438 [2024-07-24 02:12:16.117478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.438 [2024-07-24 02:12:16.117507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.438 qpair failed and we were unable to recover it. 00:34:01.438 [2024-07-24 02:12:16.127313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.127417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.438 [2024-07-24 02:12:16.127442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.438 [2024-07-24 02:12:16.127455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.438 [2024-07-24 02:12:16.127468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.438 [2024-07-24 02:12:16.127498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.438 qpair failed and we were unable to recover it. 
00:34:01.438 [2024-07-24 02:12:16.137382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.438 [2024-07-24 02:12:16.137492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.137517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.137531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.137544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.137573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.147418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.147582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.147609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.147623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.147639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.147671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.157436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.157553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.157580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.157594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.157607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.157636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 
00:34:01.439 [2024-07-24 02:12:16.167477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.167594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.167620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.167634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.167646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.167676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.177486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.177591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.177616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.177630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.177643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.177673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.187522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.187637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.187662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.187677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.187689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.187718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 
00:34:01.439 [2024-07-24 02:12:16.197545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.197651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.197677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.197697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.197711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.197740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.207559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.207666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.207691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.207704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.207717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.207746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.217611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.217716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.217743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.217757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.217770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.217798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 
00:34:01.439 [2024-07-24 02:12:16.227654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.227759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.227784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.227798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.227811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.227853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.237654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.237789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.237815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.237829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.237841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.237871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.247697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.247837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.247863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.247877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.247890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.247918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 
00:34:01.439 [2024-07-24 02:12:16.257754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.439 [2024-07-24 02:12:16.257856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.439 [2024-07-24 02:12:16.257881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.439 [2024-07-24 02:12:16.257895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.439 [2024-07-24 02:12:16.257908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.439 [2024-07-24 02:12:16.257938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.439 qpair failed and we were unable to recover it. 00:34:01.439 [2024-07-24 02:12:16.267783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.267895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.267920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.267933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.267946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.267976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 00:34:01.440 [2024-07-24 02:12:16.277764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.277870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.277896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.277910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.277923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.277952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 
00:34:01.440 [2024-07-24 02:12:16.287842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.287950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.287975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.287995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.288008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.288038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 00:34:01.440 [2024-07-24 02:12:16.297803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.297900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.297925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.297939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.297952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.297982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 00:34:01.440 [2024-07-24 02:12:16.307899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.308013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.308038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.308051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.308064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.308093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 
00:34:01.440 [2024-07-24 02:12:16.317902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.318007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.318033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.318047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.318060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.318090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 00:34:01.440 [2024-07-24 02:12:16.327902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.440 [2024-07-24 02:12:16.328012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.440 [2024-07-24 02:12:16.328037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.440 [2024-07-24 02:12:16.328051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.440 [2024-07-24 02:12:16.328064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.440 [2024-07-24 02:12:16.328093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.440 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.337951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.338067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.338094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.338108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.338120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.338149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 
00:34:01.698 [2024-07-24 02:12:16.347975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.348082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.348107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.348121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.348134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.348175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.358007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.358118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.358144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.358158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.358172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.358200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.368020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.368169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.368194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.368208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.368221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.368251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 
00:34:01.698 [2024-07-24 02:12:16.378126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.378230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.378262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.378277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.378290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.378327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.388106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.388216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.388242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.388256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.388268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.388297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.398104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.398206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.398232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.398246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.398259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.398300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 
00:34:01.698 [2024-07-24 02:12:16.408123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.408226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.408252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.408266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.408279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.408308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.418153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.698 [2024-07-24 02:12:16.418284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.698 [2024-07-24 02:12:16.418309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.698 [2024-07-24 02:12:16.418335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.698 [2024-07-24 02:12:16.418350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.698 [2024-07-24 02:12:16.418386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.698 qpair failed and we were unable to recover it. 00:34:01.698 [2024-07-24 02:12:16.428195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.428338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.428363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.428380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.428393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.428422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 
00:34:01.699 [2024-07-24 02:12:16.438215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.438353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.438380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.438394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.438407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.438437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.448253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.448364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.448391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.448405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.448418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.448449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.458278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.458391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.458417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.458431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.458443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.458473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 
00:34:01.699 [2024-07-24 02:12:16.468299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.468437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.468467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.468482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.468495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.468526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.478327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.478438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.478464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.478477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.478489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.478519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.488368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.488487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.488513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.488526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.488539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.488569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 
00:34:01.699 [2024-07-24 02:12:16.498403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.498533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.498559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.498574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.498587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.498616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.508466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.508581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.508606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.508620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.508639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.508670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.518469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.518580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.518606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.518621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.518633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.518662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 
00:34:01.699 [2024-07-24 02:12:16.528496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.528598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.528623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.528637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.528650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.528680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.538518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.538621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.538646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.538660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.699 [2024-07-24 02:12:16.538673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.699 [2024-07-24 02:12:16.538703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.699 qpair failed and we were unable to recover it. 00:34:01.699 [2024-07-24 02:12:16.548550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.699 [2024-07-24 02:12:16.548658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.699 [2024-07-24 02:12:16.548683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.699 [2024-07-24 02:12:16.548696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.700 [2024-07-24 02:12:16.548708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.700 [2024-07-24 02:12:16.548739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.700 qpair failed and we were unable to recover it. 
00:34:01.700 [2024-07-24 02:12:16.558583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.700 [2024-07-24 02:12:16.558690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.700 [2024-07-24 02:12:16.558715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.700 [2024-07-24 02:12:16.558729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.700 [2024-07-24 02:12:16.558741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.700 [2024-07-24 02:12:16.558769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.700 qpair failed and we were unable to recover it. 00:34:01.700 [2024-07-24 02:12:16.568607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.700 [2024-07-24 02:12:16.568709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.700 [2024-07-24 02:12:16.568735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.700 [2024-07-24 02:12:16.568749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.700 [2024-07-24 02:12:16.568762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.700 [2024-07-24 02:12:16.568792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.700 qpair failed and we were unable to recover it. 00:34:01.700 [2024-07-24 02:12:16.578599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.700 [2024-07-24 02:12:16.578704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.700 [2024-07-24 02:12:16.578729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.700 [2024-07-24 02:12:16.578743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.700 [2024-07-24 02:12:16.578755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.700 [2024-07-24 02:12:16.578786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.700 qpair failed and we were unable to recover it. 
00:34:01.700 [2024-07-24 02:12:16.588638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.700 [2024-07-24 02:12:16.588744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.700 [2024-07-24 02:12:16.588769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.700 [2024-07-24 02:12:16.588783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.700 [2024-07-24 02:12:16.588795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.700 [2024-07-24 02:12:16.588825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.700 qpair failed and we were unable to recover it. 00:34:01.958 [2024-07-24 02:12:16.598764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.958 [2024-07-24 02:12:16.598911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.958 [2024-07-24 02:12:16.598936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.958 [2024-07-24 02:12:16.598950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.958 [2024-07-24 02:12:16.598967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.958 [2024-07-24 02:12:16.598997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.958 qpair failed and we were unable to recover it. 00:34:01.958 [2024-07-24 02:12:16.608771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.958 [2024-07-24 02:12:16.608884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.958 [2024-07-24 02:12:16.608910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.958 [2024-07-24 02:12:16.608924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.958 [2024-07-24 02:12:16.608936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.958 [2024-07-24 02:12:16.608966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.958 qpair failed and we were unable to recover it. 
00:34:01.958 [2024-07-24 02:12:16.618710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.958 [2024-07-24 02:12:16.618818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.958 [2024-07-24 02:12:16.618845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.958 [2024-07-24 02:12:16.618859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.958 [2024-07-24 02:12:16.618871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.958 [2024-07-24 02:12:16.618900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.958 qpair failed and we were unable to recover it. 00:34:01.958 [2024-07-24 02:12:16.628769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.958 [2024-07-24 02:12:16.628891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.958 [2024-07-24 02:12:16.628916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.958 [2024-07-24 02:12:16.628930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.958 [2024-07-24 02:12:16.628941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.958 [2024-07-24 02:12:16.628971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.958 qpair failed and we were unable to recover it. 00:34:01.958 [2024-07-24 02:12:16.638810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.958 [2024-07-24 02:12:16.638907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.958 [2024-07-24 02:12:16.638933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.958 [2024-07-24 02:12:16.638947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.958 [2024-07-24 02:12:16.638960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.958 [2024-07-24 02:12:16.638989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.958 qpair failed and we were unable to recover it. 
00:34:01.958 [2024-07-24 02:12:16.648842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.958 [2024-07-24 02:12:16.648949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.958 [2024-07-24 02:12:16.648974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.958 [2024-07-24 02:12:16.648988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.649000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.649031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.658871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.658975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.659001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.659015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.659027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.659057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.668902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.669004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.669029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.669043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.669055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.669085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 
00:34:01.959 [2024-07-24 02:12:16.678921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.679024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.679050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.679064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.679076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.679107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.688919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.689022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.689047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.689067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.689081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.689111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.699001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.699137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.699164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.699179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.699199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.699233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 
00:34:01.959 [2024-07-24 02:12:16.709012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.709117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.709143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.709157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.709170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.709199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.719042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.719150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.719176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.719191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.719203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.719233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.729048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.729155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.729181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.729194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.729207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.729237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 
00:34:01.959 [2024-07-24 02:12:16.739115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.739219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.739247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.739262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.739275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.739306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.749124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.749236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.749262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.749275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.749288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.749324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.759135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.759237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.759262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.759277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.759290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.759325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 
00:34:01.959 [2024-07-24 02:12:16.769180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.769284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.959 [2024-07-24 02:12:16.769309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.959 [2024-07-24 02:12:16.769335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.959 [2024-07-24 02:12:16.769349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.959 [2024-07-24 02:12:16.769379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.959 qpair failed and we were unable to recover it. 00:34:01.959 [2024-07-24 02:12:16.779293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.959 [2024-07-24 02:12:16.779413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.779443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.779458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.779471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.779499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 00:34:01.960 [2024-07-24 02:12:16.789275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.789395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.789421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.789435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.789448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.789476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 
00:34:01.960 [2024-07-24 02:12:16.799289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.799408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.799435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.799449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.799461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.799491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 00:34:01.960 [2024-07-24 02:12:16.809332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.809439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.809464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.809478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.809491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.809521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 00:34:01.960 [2024-07-24 02:12:16.819322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.819438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.819463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.819476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.819489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.819526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 
00:34:01.960 [2024-07-24 02:12:16.829353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.829499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.829524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.829538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.829551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.829580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 00:34:01.960 [2024-07-24 02:12:16.839379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.839491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.839516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.839530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.839543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.839573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 00:34:01.960 [2024-07-24 02:12:16.849415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:01.960 [2024-07-24 02:12:16.849519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:01.960 [2024-07-24 02:12:16.849545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:01.960 [2024-07-24 02:12:16.849559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:01.960 [2024-07-24 02:12:16.849572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:01.960 [2024-07-24 02:12:16.849614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:01.960 qpair failed and we were unable to recover it. 
00:34:02.219 [2024-07-24 02:12:16.859421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.859535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.859563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.859584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.859597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.859630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.869478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.869594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.869627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.869643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.869659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.869690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.879473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.879601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.879628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.879642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.879655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.879684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 
00:34:02.219 [2024-07-24 02:12:16.889511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.889616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.889642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.889657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.889670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.889698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.899555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.899683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.899710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.899724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.899736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.899767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.909630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.909765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.909791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.909805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.909818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.909853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 
00:34:02.219 [2024-07-24 02:12:16.919581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.919686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.919712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.919726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.919738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.919767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.929667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.929770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.929798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.929813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.929826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.929856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.939687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.939787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.939812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.939826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.939839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.939869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 
00:34:02.219 [2024-07-24 02:12:16.949712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.949857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.949883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.949897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.949909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.949939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.959762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.959901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.959927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.959940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.959953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.959982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.219 [2024-07-24 02:12:16.969718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.969822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.969847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.969862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.969875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.969903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 
00:34:02.219 [2024-07-24 02:12:16.979756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.219 [2024-07-24 02:12:16.979858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.219 [2024-07-24 02:12:16.979884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.219 [2024-07-24 02:12:16.979898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.219 [2024-07-24 02:12:16.979911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.219 [2024-07-24 02:12:16.979941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.219 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:16.989836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:16.989950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:16.989976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:16.989991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:16.990004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:16.990033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:16.999828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:16.999931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:16.999957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:16.999971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:16.999990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.000021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 
00:34:02.220 [2024-07-24 02:12:17.009880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.009981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.010007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.010021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.010033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.010063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.019882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.019989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.020014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.020028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.020040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.020069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.029933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.030039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.030064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.030078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.030091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.030119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 
00:34:02.220 [2024-07-24 02:12:17.039940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.040042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.040068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.040082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.040095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.040123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.050030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.050137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.050163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.050178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.050191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.050220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.060017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.060116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.060143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.060157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.060170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.060211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 
00:34:02.220 [2024-07-24 02:12:17.070078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.070188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.070214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.070229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.070242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.070271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.080106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.080219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.080244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.080259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.080272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.080322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.090087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.090191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.090217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.090237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.090250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.090280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 
00:34:02.220 [2024-07-24 02:12:17.100108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.100244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.100270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.100285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.100300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.220 [2024-07-24 02:12:17.100339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.220 qpair failed and we were unable to recover it. 00:34:02.220 [2024-07-24 02:12:17.110184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.220 [2024-07-24 02:12:17.110299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.220 [2024-07-24 02:12:17.110333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.220 [2024-07-24 02:12:17.110348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.220 [2024-07-24 02:12:17.110361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.221 [2024-07-24 02:12:17.110402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.221 qpair failed and we were unable to recover it. 00:34:02.479 [2024-07-24 02:12:17.120181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.479 [2024-07-24 02:12:17.120308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.479 [2024-07-24 02:12:17.120341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.479 [2024-07-24 02:12:17.120356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.479 [2024-07-24 02:12:17.120368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.479 [2024-07-24 02:12:17.120397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.479 qpair failed and we were unable to recover it. 
00:34:02.479 [2024-07-24 02:12:17.130222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.479 [2024-07-24 02:12:17.130347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.479 [2024-07-24 02:12:17.130381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.479 [2024-07-24 02:12:17.130395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.479 [2024-07-24 02:12:17.130408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.479 [2024-07-24 02:12:17.130437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.479 qpair failed and we were unable to recover it. 00:34:02.479 [2024-07-24 02:12:17.140264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.479 [2024-07-24 02:12:17.140411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.479 [2024-07-24 02:12:17.140437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.479 [2024-07-24 02:12:17.140451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.479 [2024-07-24 02:12:17.140463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.479 [2024-07-24 02:12:17.140494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.479 qpair failed and we were unable to recover it. 00:34:02.479 [2024-07-24 02:12:17.150250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.479 [2024-07-24 02:12:17.150375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.479 [2024-07-24 02:12:17.150402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.479 [2024-07-24 02:12:17.150416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.479 [2024-07-24 02:12:17.150428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.150457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 
00:34:02.480 [2024-07-24 02:12:17.160303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.160420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.160446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.160460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.160472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.160502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.170311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.170421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.170446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.170460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.170473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.170502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.180357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.180458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.180488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.180503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.180515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.180544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 
00:34:02.480 [2024-07-24 02:12:17.190380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.190494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.190519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.190533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.190546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.190575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.200407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.200513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.200538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.200552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.200564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.200594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.210435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.210542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.210567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.210581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.210593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.210623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 
00:34:02.480 [2024-07-24 02:12:17.220479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.220582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.220607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.220621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.220634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.220681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.230518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.230640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.230668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.230682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.230695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.230724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.240520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.240628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.240653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.240667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.240680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.240710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 
00:34:02.480 [2024-07-24 02:12:17.250546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.250647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.250673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.250687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.250699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.250728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.260593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.260725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.260754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.260768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.260780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.260810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 00:34:02.480 [2024-07-24 02:12:17.270627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.270733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.270764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.480 [2024-07-24 02:12:17.270778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.480 [2024-07-24 02:12:17.270791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.480 [2024-07-24 02:12:17.270822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.480 qpair failed and we were unable to recover it. 
00:34:02.480 [2024-07-24 02:12:17.280660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.480 [2024-07-24 02:12:17.280792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.480 [2024-07-24 02:12:17.280818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.280831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.280844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.280874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.481 [2024-07-24 02:12:17.290651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.290753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.290778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.290792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.290804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.290833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.481 [2024-07-24 02:12:17.300726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.300825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.300850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.300863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.300876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.300905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 
00:34:02.481 [2024-07-24 02:12:17.310769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.310881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.310906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.310920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.310933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.310968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.481 [2024-07-24 02:12:17.320815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.320923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.320948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.320962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.320975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.321018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.481 [2024-07-24 02:12:17.330823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.330941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.330967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.330981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.330993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.331023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 
00:34:02.481 [2024-07-24 02:12:17.340875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.340994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.341021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.341035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.341048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.341077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.481 [2024-07-24 02:12:17.350834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.350960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.350986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.351001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.351013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.351043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.481 [2024-07-24 02:12:17.360905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.361015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.361046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.361061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.361074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.361115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 
00:34:02.481 [2024-07-24 02:12:17.370914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.481 [2024-07-24 02:12:17.371029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.481 [2024-07-24 02:12:17.371055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.481 [2024-07-24 02:12:17.371069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.481 [2024-07-24 02:12:17.371082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.481 [2024-07-24 02:12:17.371111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.481 qpair failed and we were unable to recover it. 00:34:02.739 [2024-07-24 02:12:17.380918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.381025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.381051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.381065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.381078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.381107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 00:34:02.739 [2024-07-24 02:12:17.390999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.391108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.391133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.391147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.391160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.391189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 
00:34:02.739 [2024-07-24 02:12:17.400985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.401095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.401121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.401135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.401153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.401195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 00:34:02.739 [2024-07-24 02:12:17.410999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.411132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.411157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.411171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.411184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.411212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 00:34:02.739 [2024-07-24 02:12:17.421053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.421154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.421180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.421194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.421207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.421248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 
00:34:02.739 [2024-07-24 02:12:17.431061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.431168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.431193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.431207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.431219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.431249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 00:34:02.739 [2024-07-24 02:12:17.441081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.441217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.441243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.441257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.441270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.441299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.739 qpair failed and we were unable to recover it. 00:34:02.739 [2024-07-24 02:12:17.451137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.739 [2024-07-24 02:12:17.451251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.739 [2024-07-24 02:12:17.451276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.739 [2024-07-24 02:12:17.451291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.739 [2024-07-24 02:12:17.451303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.739 [2024-07-24 02:12:17.451340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 
00:34:02.740 [2024-07-24 02:12:17.461216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.461326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.461352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.461366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.461378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.461408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.471178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.471281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.471306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.471330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.471344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.471374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.481219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.481335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.481361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.481375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.481387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.481417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 
00:34:02.740 [2024-07-24 02:12:17.491232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.491345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.491370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.491390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.491403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.491433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.501261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.501368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.501395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.501409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.501421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.501450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.511352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.511465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.511492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.511510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.511524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.511554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 
00:34:02.740 [2024-07-24 02:12:17.521406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.521516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.521542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.521556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.521569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.521598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.531403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.531554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.531579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.531593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.531606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.531635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.541388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.541488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.541513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.541527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.541540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.541569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 
00:34:02.740 [2024-07-24 02:12:17.551428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.551580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.551606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.551620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.551633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.551661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.561443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.561546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.561571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.561585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.561596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.561625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.571476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.571611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.571636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.571650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.571662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.571692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 
00:34:02.740 [2024-07-24 02:12:17.581527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.581628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.740 [2024-07-24 02:12:17.581654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.740 [2024-07-24 02:12:17.581676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.740 [2024-07-24 02:12:17.581690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.740 [2024-07-24 02:12:17.581720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.740 qpair failed and we were unable to recover it. 00:34:02.740 [2024-07-24 02:12:17.591538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.740 [2024-07-24 02:12:17.591649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.741 [2024-07-24 02:12:17.591674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.741 [2024-07-24 02:12:17.591690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.741 [2024-07-24 02:12:17.591703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.741 [2024-07-24 02:12:17.591733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.741 qpair failed and we were unable to recover it. 00:34:02.741 [2024-07-24 02:12:17.601608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.741 [2024-07-24 02:12:17.601744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.741 [2024-07-24 02:12:17.601769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.741 [2024-07-24 02:12:17.601783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.741 [2024-07-24 02:12:17.601795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.741 [2024-07-24 02:12:17.601823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.741 qpair failed and we were unable to recover it. 
00:34:02.741 [2024-07-24 02:12:17.611587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.741 [2024-07-24 02:12:17.611691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.741 [2024-07-24 02:12:17.611717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.741 [2024-07-24 02:12:17.611731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.741 [2024-07-24 02:12:17.611743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.741 [2024-07-24 02:12:17.611772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.741 qpair failed and we were unable to recover it. 00:34:02.741 [2024-07-24 02:12:17.621604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.741 [2024-07-24 02:12:17.621708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.741 [2024-07-24 02:12:17.621733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.741 [2024-07-24 02:12:17.621747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.741 [2024-07-24 02:12:17.621760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.741 [2024-07-24 02:12:17.621789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.741 qpair failed and we were unable to recover it. 00:34:02.741 [2024-07-24 02:12:17.631690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.741 [2024-07-24 02:12:17.631828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.741 [2024-07-24 02:12:17.631854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.741 [2024-07-24 02:12:17.631867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.741 [2024-07-24 02:12:17.631880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.741 [2024-07-24 02:12:17.631911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.741 qpair failed and we were unable to recover it. 
00:34:02.999 [2024-07-24 02:12:17.641689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.999 [2024-07-24 02:12:17.641804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.999 [2024-07-24 02:12:17.641830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.999 [2024-07-24 02:12:17.641844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.999 [2024-07-24 02:12:17.641857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.999 [2024-07-24 02:12:17.641886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.999 qpair failed and we were unable to recover it. 00:34:02.999 [2024-07-24 02:12:17.651727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.999 [2024-07-24 02:12:17.651826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.999 [2024-07-24 02:12:17.651851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.999 [2024-07-24 02:12:17.651865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.999 [2024-07-24 02:12:17.651878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.999 [2024-07-24 02:12:17.651906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.999 qpair failed and we were unable to recover it. 00:34:02.999 [2024-07-24 02:12:17.661755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.999 [2024-07-24 02:12:17.661891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.999 [2024-07-24 02:12:17.661916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.999 [2024-07-24 02:12:17.661930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.999 [2024-07-24 02:12:17.661943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.999 [2024-07-24 02:12:17.661972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.999 qpair failed and we were unable to recover it. 
00:34:02.999 [2024-07-24 02:12:17.671796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.999 [2024-07-24 02:12:17.671909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:02.999 [2024-07-24 02:12:17.671940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:02.999 [2024-07-24 02:12:17.671955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:02.999 [2024-07-24 02:12:17.671967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:02.999 [2024-07-24 02:12:17.671996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:02.999 qpair failed and we were unable to recover it. 00:34:02.999 [2024-07-24 02:12:17.681771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:02.999 [2024-07-24 02:12:17.681878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.681903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.681917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.681930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.681958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 00:34:03.000 [2024-07-24 02:12:17.691833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.691941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.691967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.691981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.691996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.692026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 
00:34:03.000 [2024-07-24 02:12:17.701863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.702013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.702039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.702053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.702066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.702096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 00:34:03.000 [2024-07-24 02:12:17.711911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.712034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.712060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.712074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.712089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.712139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 00:34:03.000 [2024-07-24 02:12:17.721986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.722128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.722155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.722169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.722182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.722211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 
00:34:03.000 [2024-07-24 02:12:17.731961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.732062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.732087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.732101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.732114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.732143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 00:34:03.000 [2024-07-24 02:12:17.741973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.742080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.742106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.742120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.742132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.742162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 00:34:03.000 [2024-07-24 02:12:17.752018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.000 [2024-07-24 02:12:17.752125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.000 [2024-07-24 02:12:17.752151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.000 [2024-07-24 02:12:17.752165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.000 [2024-07-24 02:12:17.752177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.000 [2024-07-24 02:12:17.752206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.000 qpair failed and we were unable to recover it. 
00:34:03.000 [2024-07-24 02:12:17.762038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:03.000 [2024-07-24 02:12:17.762144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:03.000 [2024-07-24 02:12:17.762174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:03.000 [2024-07-24 02:12:17.762190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:03.000 [2024-07-24 02:12:17.762203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90
00:34:03.000 [2024-07-24 02:12:17.762234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:03.000 qpair failed and we were unable to recover it.
[... the same six-line CONNECT error sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f1164000b90; CQ transport error -6 on qpair id 1) repeats for every subsequent connect attempt at roughly 10 ms intervals, from 02:12:17.772 through 02:12:18.444 (elapsed 00:34:03.000-00:34:03.781); each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:03.781 [2024-07-24 02:12:18.453977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.781 [2024-07-24 02:12:18.454084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.781 [2024-07-24 02:12:18.454109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.781 [2024-07-24 02:12:18.454122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.781 [2024-07-24 02:12:18.454135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.781 [2024-07-24 02:12:18.454164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.781 qpair failed and we were unable to recover it. 00:34:03.781 [2024-07-24 02:12:18.464037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.781 [2024-07-24 02:12:18.464177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.781 [2024-07-24 02:12:18.464202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.464216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.464228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.464258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.474060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.474167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.474197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.474213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.474225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.474254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 
00:34:03.782 [2024-07-24 02:12:18.484080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.484215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.484241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.484256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.484268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.484298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.494122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.494225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.494252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.494266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.494278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.494308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.504107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.504211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.504237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.504251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.504263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.504293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 
00:34:03.782 [2024-07-24 02:12:18.514187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.514325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.514351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.514364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.514377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.514412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.524200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.524309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.524342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.524357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.524370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.524399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.534210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.534314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.534347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.534360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.534373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.534403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 
00:34:03.782 [2024-07-24 02:12:18.544297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.544416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.544442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.544456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.544468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.544499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.554254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.554366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.554391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.554405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.554418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.554447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.564282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.564398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.564430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.564445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.564456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.564498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 
00:34:03.782 [2024-07-24 02:12:18.574314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.574433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.574459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.574473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.574486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.574515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.584367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.782 [2024-07-24 02:12:18.584479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.782 [2024-07-24 02:12:18.584506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.782 [2024-07-24 02:12:18.584521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.782 [2024-07-24 02:12:18.584537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.782 [2024-07-24 02:12:18.584568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.782 qpair failed and we were unable to recover it. 00:34:03.782 [2024-07-24 02:12:18.594397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.594512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.594538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.594553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.594566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.594595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 
00:34:03.783 [2024-07-24 02:12:18.604428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.604542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.604566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.604579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.604591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.604625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 00:34:03.783 [2024-07-24 02:12:18.614438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.614568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.614594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.614608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.614620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.614651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 00:34:03.783 [2024-07-24 02:12:18.624459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.624581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.624607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.624621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.624633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.624663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 
00:34:03.783 [2024-07-24 02:12:18.634480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.634586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.634611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.634625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.634638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.634667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 00:34:03.783 [2024-07-24 02:12:18.644556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.644712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.644738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.644752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.644765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.644807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 00:34:03.783 [2024-07-24 02:12:18.654546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.654650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.654676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.654690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.654702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.654731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 
00:34:03.783 [2024-07-24 02:12:18.664552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.664655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.664680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.664694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.664706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.664736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 00:34:03.783 [2024-07-24 02:12:18.674680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:03.783 [2024-07-24 02:12:18.674825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:03.783 [2024-07-24 02:12:18.674850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:03.783 [2024-07-24 02:12:18.674865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:03.783 [2024-07-24 02:12:18.674877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:03.783 [2024-07-24 02:12:18.674906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.783 qpair failed and we were unable to recover it. 00:34:04.042 [2024-07-24 02:12:18.684650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.684760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.684786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.684800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.684812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.684842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 
00:34:04.042 [2024-07-24 02:12:18.694707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.694820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.694846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.694860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.694878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.694908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 00:34:04.042 [2024-07-24 02:12:18.704669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.704773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.704798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.704812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.704825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.704854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 00:34:04.042 [2024-07-24 02:12:18.714735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.714849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.714874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.714887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.714900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.714928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 
00:34:04.042 [2024-07-24 02:12:18.724744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.724850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.724875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.724889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.724902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.724931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 00:34:04.042 [2024-07-24 02:12:18.734791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.734891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.734917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.734930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.734943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.734971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 00:34:04.042 [2024-07-24 02:12:18.744817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.744917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.744943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.744957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.744969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.745012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 
00:34:04.042 [2024-07-24 02:12:18.754875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.754985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.755013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.042 [2024-07-24 02:12:18.755036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.042 [2024-07-24 02:12:18.755049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.042 [2024-07-24 02:12:18.755080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.042 qpair failed and we were unable to recover it. 00:34:04.042 [2024-07-24 02:12:18.764881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.042 [2024-07-24 02:12:18.764986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.042 [2024-07-24 02:12:18.765012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.765026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.765038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.765068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.774905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.775028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.775056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.775070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.775083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.775113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 
00:34:04.043 [2024-07-24 02:12:18.784944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.785042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.785068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.785089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.785103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.785133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.794982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.795115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.795141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.795155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.795168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.795197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.805004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.805146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.805172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.805186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.805198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.805228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 
00:34:04.043 [2024-07-24 02:12:18.815042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.815154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.815180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.815194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.815206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.815235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.825063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.825222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.825247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.825261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.825274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.825303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.835091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.835201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.835227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.835242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.835255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.835285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 
00:34:04.043 [2024-07-24 02:12:18.845134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.845239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.845264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.845278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.845291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.845327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.855157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.855258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.855283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.855298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.855310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.855347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.865180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.865286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.865311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.865332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.865345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.865377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 
00:34:04.043 [2024-07-24 02:12:18.875202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.875313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.875352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.875367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.875380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.875409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.043 [2024-07-24 02:12:18.885226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.043 [2024-07-24 02:12:18.885339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.043 [2024-07-24 02:12:18.885365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.043 [2024-07-24 02:12:18.885378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.043 [2024-07-24 02:12:18.885391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.043 [2024-07-24 02:12:18.885421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.043 qpair failed and we were unable to recover it. 00:34:04.044 [2024-07-24 02:12:18.895270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.044 [2024-07-24 02:12:18.895414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.044 [2024-07-24 02:12:18.895441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.044 [2024-07-24 02:12:18.895455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.044 [2024-07-24 02:12:18.895467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.044 [2024-07-24 02:12:18.895498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.044 qpair failed and we were unable to recover it. 
00:34:04.044 [2024-07-24 02:12:18.905269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.044 [2024-07-24 02:12:18.905417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.044 [2024-07-24 02:12:18.905442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.044 [2024-07-24 02:12:18.905456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.044 [2024-07-24 02:12:18.905469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.044 [2024-07-24 02:12:18.905499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.044 qpair failed and we were unable to recover it. 00:34:04.044 [2024-07-24 02:12:18.915354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.044 [2024-07-24 02:12:18.915464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.044 [2024-07-24 02:12:18.915492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.044 [2024-07-24 02:12:18.915507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.044 [2024-07-24 02:12:18.915523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.044 [2024-07-24 02:12:18.915559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.044 qpair failed and we were unable to recover it. 00:34:04.044 [2024-07-24 02:12:18.925362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.044 [2024-07-24 02:12:18.925469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.044 [2024-07-24 02:12:18.925495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.044 [2024-07-24 02:12:18.925510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.044 [2024-07-24 02:12:18.925523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.044 [2024-07-24 02:12:18.925552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.044 qpair failed and we were unable to recover it. 
00:34:04.044 [2024-07-24 02:12:18.935413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.044 [2024-07-24 02:12:18.935527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.044 [2024-07-24 02:12:18.935556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.044 [2024-07-24 02:12:18.935571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.044 [2024-07-24 02:12:18.935584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.044 [2024-07-24 02:12:18.935619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.044 qpair failed and we were unable to recover it. 00:34:04.302 [2024-07-24 02:12:18.945449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.302 [2024-07-24 02:12:18.945568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.302 [2024-07-24 02:12:18.945594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.302 [2024-07-24 02:12:18.945608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.302 [2024-07-24 02:12:18.945621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.302 [2024-07-24 02:12:18.945651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.302 qpair failed and we were unable to recover it. 00:34:04.302 [2024-07-24 02:12:18.955476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.302 [2024-07-24 02:12:18.955635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.302 [2024-07-24 02:12:18.955661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.302 [2024-07-24 02:12:18.955674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.302 [2024-07-24 02:12:18.955687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.302 [2024-07-24 02:12:18.955716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.302 qpair failed and we were unable to recover it. 
00:34:04.302 [2024-07-24 02:12:18.965473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.302 [2024-07-24 02:12:18.965593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.302 [2024-07-24 02:12:18.965624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.302 [2024-07-24 02:12:18.965639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.302 [2024-07-24 02:12:18.965652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.302 [2024-07-24 02:12:18.965681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.302 qpair failed and we were unable to recover it. 00:34:04.302 [2024-07-24 02:12:18.975509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.302 [2024-07-24 02:12:18.975639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.302 [2024-07-24 02:12:18.975665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.302 [2024-07-24 02:12:18.975679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.302 [2024-07-24 02:12:18.975692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.302 [2024-07-24 02:12:18.975722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.302 qpair failed and we were unable to recover it. 00:34:04.302 [2024-07-24 02:12:18.985528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.302 [2024-07-24 02:12:18.985655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.302 [2024-07-24 02:12:18.985680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.302 [2024-07-24 02:12:18.985694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.302 [2024-07-24 02:12:18.985707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.302 [2024-07-24 02:12:18.985736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.302 qpair failed and we were unable to recover it. 
00:34:04.302 [2024-07-24 02:12:18.995567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.302 [2024-07-24 02:12:18.995677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.302 [2024-07-24 02:12:18.995702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.302 [2024-07-24 02:12:18.995716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.302 [2024-07-24 02:12:18.995728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.302 [2024-07-24 02:12:18.995758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.302 qpair failed and we were unable to recover it. 00:34:04.302 [2024-07-24 02:12:19.005594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.005721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.005745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.005759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.005772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.005807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.015612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.015761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.015787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.015801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.015814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.015843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 
00:34:04.303 [2024-07-24 02:12:19.025620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.025722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.025748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.025761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.025774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.025804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.035660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.035792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.035818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.035832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.035845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.035873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.045657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.045764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.045788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.045802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.045815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.045844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 
00:34:04.303 [2024-07-24 02:12:19.055736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.055842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.055875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.055890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.055903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.055932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.065732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.065830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.065855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.065869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.065882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.065912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.075752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.075917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.075945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.075960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.075973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.076003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 
00:34:04.303 [2024-07-24 02:12:19.085777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.085889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.085915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.085928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.085941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.085973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.095809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.095916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.095941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.095956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.095973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.096005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.105864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.105986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.106012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.106025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.106038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.106067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 
00:34:04.303 [2024-07-24 02:12:19.115885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.115989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.116014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.116028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.116040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.303 [2024-07-24 02:12:19.116069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.303 qpair failed and we were unable to recover it. 00:34:04.303 [2024-07-24 02:12:19.125914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.303 [2024-07-24 02:12:19.126030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.303 [2024-07-24 02:12:19.126055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.303 [2024-07-24 02:12:19.126069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.303 [2024-07-24 02:12:19.126081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.126112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 00:34:04.304 [2024-07-24 02:12:19.135943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.136053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.136078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.304 [2024-07-24 02:12:19.136092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.304 [2024-07-24 02:12:19.136105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.136135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 
00:34:04.304 [2024-07-24 02:12:19.145958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.146066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.146091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.304 [2024-07-24 02:12:19.146105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.304 [2024-07-24 02:12:19.146118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.146148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 00:34:04.304 [2024-07-24 02:12:19.156009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.156125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.156150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.304 [2024-07-24 02:12:19.156164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.304 [2024-07-24 02:12:19.156177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.156205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 00:34:04.304 [2024-07-24 02:12:19.165994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.166096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.166121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.304 [2024-07-24 02:12:19.166135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.304 [2024-07-24 02:12:19.166148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.166177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 
00:34:04.304 [2024-07-24 02:12:19.176043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.176144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.176169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.304 [2024-07-24 02:12:19.176183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.304 [2024-07-24 02:12:19.176195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.176224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 00:34:04.304 [2024-07-24 02:12:19.186075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.186189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.186214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.304 [2024-07-24 02:12:19.186234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.304 [2024-07-24 02:12:19.186247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.304 [2024-07-24 02:12:19.186277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.304 qpair failed and we were unable to recover it. 00:34:04.304 [2024-07-24 02:12:19.196172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.304 [2024-07-24 02:12:19.196304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.304 [2024-07-24 02:12:19.196339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.196358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.196373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.196405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 
00:34:04.563 [2024-07-24 02:12:19.206113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.206222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.206249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.206263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.206276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.206306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 00:34:04.563 [2024-07-24 02:12:19.216199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.216315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.216348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.216362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.216375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.216416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 00:34:04.563 [2024-07-24 02:12:19.226203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.226341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.226367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.226381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.226397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.226427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 
00:34:04.563 [2024-07-24 02:12:19.236210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.236325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.236351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.236365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.236378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.236407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 00:34:04.563 [2024-07-24 02:12:19.246218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.246322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.246348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.246362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.246374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.246405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 00:34:04.563 [2024-07-24 02:12:19.256322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.256459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.256486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.256500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.256513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.256555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 
00:34:04.563 [2024-07-24 02:12:19.266340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.266446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.266472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.266486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.266498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.266539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 00:34:04.563 [2024-07-24 02:12:19.276342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.563 [2024-07-24 02:12:19.276451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.563 [2024-07-24 02:12:19.276477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.563 [2024-07-24 02:12:19.276497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.563 [2024-07-24 02:12:19.276511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.563 [2024-07-24 02:12:19.276540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.563 qpair failed and we were unable to recover it. 00:34:04.563 [2024-07-24 02:12:19.286340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.286441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.286467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.286481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.286494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.286524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 
00:34:04.564 [2024-07-24 02:12:19.296398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.296510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.296537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.296551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.296564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.296594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.306408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.306516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.306542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.306556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.306569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.306597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.316510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.316622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.316647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.316661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.316674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.316704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 
00:34:04.564 [2024-07-24 02:12:19.326576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.326694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.326719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.326732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.326745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.326775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.336515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.336649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.336675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.336689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.336702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.336730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.346549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.346664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.346691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.346705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.346718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.346759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 
00:34:04.564 [2024-07-24 02:12:19.356583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.356689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.356715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.356729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.356742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.356782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.366627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.366744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.366775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.366789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.366802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.366834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.376692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.376803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.376829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.376843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.376856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.376886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 
00:34:04.564 [2024-07-24 02:12:19.386638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.386745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.386770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.386784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.386797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.386826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.396714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.396829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.396855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.396869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.564 [2024-07-24 02:12:19.396882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.564 [2024-07-24 02:12:19.396912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.564 qpair failed and we were unable to recover it. 00:34:04.564 [2024-07-24 02:12:19.406782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.564 [2024-07-24 02:12:19.406892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.564 [2024-07-24 02:12:19.406918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.564 [2024-07-24 02:12:19.406932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.565 [2024-07-24 02:12:19.406945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.565 [2024-07-24 02:12:19.406980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.565 qpair failed and we were unable to recover it. 
00:34:04.565 [2024-07-24 02:12:19.416746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.565 [2024-07-24 02:12:19.416854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.565 [2024-07-24 02:12:19.416880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.565 [2024-07-24 02:12:19.416895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.565 [2024-07-24 02:12:19.416907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.565 [2024-07-24 02:12:19.416948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.565 qpair failed and we were unable to recover it. 00:34:04.565 [2024-07-24 02:12:19.426724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.565 [2024-07-24 02:12:19.426869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.565 [2024-07-24 02:12:19.426894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.565 [2024-07-24 02:12:19.426908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.565 [2024-07-24 02:12:19.426921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.565 [2024-07-24 02:12:19.426951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.565 qpair failed and we were unable to recover it. 00:34:04.565 [2024-07-24 02:12:19.436844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.565 [2024-07-24 02:12:19.436949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.565 [2024-07-24 02:12:19.436974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.565 [2024-07-24 02:12:19.436988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.565 [2024-07-24 02:12:19.437000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.565 [2024-07-24 02:12:19.437029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.565 qpair failed and we were unable to recover it. 
00:34:04.565 [2024-07-24 02:12:19.446829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.565 [2024-07-24 02:12:19.446959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.565 [2024-07-24 02:12:19.446986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.565 [2024-07-24 02:12:19.447000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.565 [2024-07-24 02:12:19.447015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.565 [2024-07-24 02:12:19.447048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.565 qpair failed and we were unable to recover it. 00:34:04.565 [2024-07-24 02:12:19.456862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.456983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.457014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.457029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.457042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.457071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.466862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.466997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.467022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.467036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.467048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.467077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 
00:34:04.824 [2024-07-24 02:12:19.476885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.476997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.477022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.477037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.477049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.477079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.486950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.487097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.487122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.487136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.487148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.487178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.496951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.497063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.497089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.497103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.497121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.497152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 
00:34:04.824 [2024-07-24 02:12:19.506961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.507097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.507122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.507135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.507148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.507177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.516988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.517099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.517123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.517138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.517150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.517179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.527017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.527119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.527144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.527158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.527170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.527200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 
00:34:04.824 [2024-07-24 02:12:19.537050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.537160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.537186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.537200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.537212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.537255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.547075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.547187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.547212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.547226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.547239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.547268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 00:34:04.824 [2024-07-24 02:12:19.557121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.824 [2024-07-24 02:12:19.557276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.824 [2024-07-24 02:12:19.557302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.824 [2024-07-24 02:12:19.557324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.824 [2024-07-24 02:12:19.557341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.824 [2024-07-24 02:12:19.557383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.824 qpair failed and we were unable to recover it. 
00:34:04.825 [2024-07-24 02:12:19.567110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.567212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.567237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.567250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.567262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.567290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.577162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.577282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.577307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.577331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.577344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.577374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.587188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.587294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.587325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.587347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.587361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.587391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 
00:34:04.825 [2024-07-24 02:12:19.597217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.597329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.597355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.597369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.597381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.597411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.607219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.607333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.607358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.607371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.607382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.607411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.617283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.617434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.617460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.617474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.617487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.617516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 
00:34:04.825 [2024-07-24 02:12:19.627345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.627473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.627501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.627515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.627531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.627576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.637350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.637462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.637487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.637501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.637513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.637544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.647381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.647527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.647553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.647567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.647579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.647608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 
00:34:04.825 [2024-07-24 02:12:19.657377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.657484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.657510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.657524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.657537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.657566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.667421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.667534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.667560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.667574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.667586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.667628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 00:34:04.825 [2024-07-24 02:12:19.677459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.677612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.677638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.677658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.825 [2024-07-24 02:12:19.677672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.825 [2024-07-24 02:12:19.677701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.825 qpair failed and we were unable to recover it. 
00:34:04.825 [2024-07-24 02:12:19.687512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.825 [2024-07-24 02:12:19.687616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.825 [2024-07-24 02:12:19.687641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.825 [2024-07-24 02:12:19.687655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.826 [2024-07-24 02:12:19.687668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.826 [2024-07-24 02:12:19.687697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.826 qpair failed and we were unable to recover it. 00:34:04.826 [2024-07-24 02:12:19.697498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.826 [2024-07-24 02:12:19.697614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.826 [2024-07-24 02:12:19.697642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.826 [2024-07-24 02:12:19.697657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.826 [2024-07-24 02:12:19.697670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.826 [2024-07-24 02:12:19.697700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.826 qpair failed and we were unable to recover it. 00:34:04.826 [2024-07-24 02:12:19.707519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:04.826 [2024-07-24 02:12:19.707619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:04.826 [2024-07-24 02:12:19.707645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:04.826 [2024-07-24 02:12:19.707659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:04.826 [2024-07-24 02:12:19.707671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:04.826 [2024-07-24 02:12:19.707701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:04.826 qpair failed and we were unable to recover it. 
00:34:04.826 [2024-07-24 02:12:19.717652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:05.084 [2024-07-24 02:12:19.717779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:05.084 [2024-07-24 02:12:19.717805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:05.084 [2024-07-24 02:12:19.717819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:05.084 [2024-07-24 02:12:19.717832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:05.084 [2024-07-24 02:12:19.717861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:05.084 qpair failed and we were unable to recover it. 00:34:05.084 [2024-07-24 02:12:19.727590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:05.084 [2024-07-24 02:12:19.727697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:05.084 [2024-07-24 02:12:19.727723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:05.084 [2024-07-24 02:12:19.727737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:05.084 [2024-07-24 02:12:19.727749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:05.084 [2024-07-24 02:12:19.727779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:05.084 qpair failed and we were unable to recover it. 00:34:05.084 [2024-07-24 02:12:19.737603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:05.084 [2024-07-24 02:12:19.737707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:05.084 [2024-07-24 02:12:19.737732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:05.084 [2024-07-24 02:12:19.737747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:05.084 [2024-07-24 02:12:19.737759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:05.084 [2024-07-24 02:12:19.737789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:05.084 qpair failed and we were unable to recover it. 
00:34:05.084 [2024-07-24 02:12:19.747649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:05.084 [2024-07-24 02:12:19.747761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:05.084 [2024-07-24 02:12:19.747790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:05.084 [2024-07-24 02:12:19.747804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:05.084 [2024-07-24 02:12:19.747816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:05.084 [2024-07-24 02:12:19.747846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:05.084 qpair failed and we were unable to recover it. 00:34:05.084 [2024-07-24 02:12:19.757702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:05.084 [2024-07-24 02:12:19.757844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:05.084 [2024-07-24 02:12:19.757870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:05.084 [2024-07-24 02:12:19.757884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:05.084 [2024-07-24 02:12:19.757896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1164000b90 00:34:05.084 [2024-07-24 02:12:19.757937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:05.084 qpair failed and we were unable to recover it. 00:34:05.084 [2024-07-24 02:12:19.758069] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:05.084 A controller has encountered a failure and is being reset. 00:34:05.084 Controller properly reset. 00:34:05.084 Initializing NVMe Controllers 00:34:05.084 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:05.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:05.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:05.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:05.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:05.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:05.084 Initialization complete. Launching workers. 
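The block above is the expected shape of this disconnect test: while the target controller is being torn down, every I/O queue-pair CONNECT for controller ID 0x1 is rejected (the target logs "Unknown controller ID 0x1", the host sees the Connect command fail with sct 1, sc 130), each qpair is abandoned, and recovery only happens once the controller is reset and re-attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. For reference, a minimal sketch of performing that same attach by hand against the listener shown in the log; the bdev name "Nvme0" and the rpc.py path are illustrative assumptions, not part of this test run:

  # SPDK host side: attach the NVMe-oF TCP controller as a bdev
  # ("Nvme0" and the relative script path are placeholders).
  ./scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1

  # Kernel-initiator equivalent using nvme-cli:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

Either form issues the same Fabrics CONNECT that fails repeatedly above; against a healthy target it completes and the subsystem's namespaces become visible on the host.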
00:34:05.084 Starting thread on core 1 00:34:05.084 Starting thread on core 2 00:34:05.084 Starting thread on core 3 00:34:05.084 Starting thread on core 0 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:05.084 00:34:05.084 real 0m10.792s 00:34:05.084 user 0m18.362s 00:34:05.084 sys 0m5.174s 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:05.084 ************************************ 00:34:05.084 END TEST nvmf_target_disconnect_tc2 00:34:05.084 ************************************ 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:05.084 02:12:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:05.084 rmmod nvme_tcp 00:34:05.084 rmmod nvme_fabrics 00:34:05.341 rmmod nvme_keyring 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1584249 ']' 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1584249 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1584249 ']' 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1584249 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1584249 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:05.341 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1584249' 00:34:05.341 killing process with pid 1584249 00:34:05.342 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@967 -- # kill 1584249 00:34:05.342 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1584249 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.599 02:12:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.502 02:12:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:07.502 00:34:07.502 real 0m15.335s 00:34:07.502 user 0m44.645s 00:34:07.502 sys 0m6.979s 00:34:07.502 02:12:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:07.502 02:12:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:07.502 ************************************ 00:34:07.502 END TEST nvmf_target_disconnect 00:34:07.502 ************************************ 00:34:07.502 02:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:07.502 00:34:07.502 real 6m29.968s 00:34:07.502 user 16m43.426s 00:34:07.502 sys 1m23.601s 00:34:07.502 02:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:07.502 02:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:07.502 ************************************ 00:34:07.502 END TEST nvmf_host 00:34:07.502 ************************************ 00:34:07.502 00:34:07.502 real 27m4.031s 00:34:07.502 user 73m45.213s 00:34:07.502 sys 6m22.871s 00:34:07.502 02:12:22 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:07.502 02:12:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.502 ************************************ 00:34:07.502 END TEST nvmf_tcp 00:34:07.502 ************************************ 00:34:07.502 02:12:22 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:07.502 02:12:22 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:07.502 02:12:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:07.502 02:12:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:07.502 02:12:22 -- common/autotest_common.sh@10 -- # set +x 00:34:07.761 ************************************ 00:34:07.761 START TEST spdkcli_nvmf_tcp 00:34:07.761 ************************************ 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:07.761 * Looking for test storage... 
00:34:07.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1585444 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1585444 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1585444 ']' 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:07.761 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.761 [2024-07-24 02:12:22.515902] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:34:07.761 [2024-07-24 02:12:22.515986] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585444 ] 00:34:07.762 EAL: No free 2048 kB hugepages reported on node 1 00:34:07.762 [2024-07-24 02:12:22.576626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:08.020 [2024-07-24 02:12:22.669336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.020 [2024-07-24 02:12:22.669400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:08.020 02:12:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:08.020 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:08.020 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:08.020 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:08.020 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:08.020 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:08.020 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:08.020 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:08.020 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:08.020 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:08.020 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:08.020 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:08.020 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:08.020 ' 00:34:10.550 [2024-07-24 02:12:25.308647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.921 [2024-07-24 02:12:26.544989] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:14.447 [2024-07-24 02:12:28.820156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:16.346 [2024-07-24 02:12:30.786382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:17.719 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:17.719 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:17.719 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:17.719 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:17.719 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:17.719 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:17.719 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:17.719 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:17.719 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:17.719 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:17.719 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:17.719 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:17.719 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:17.719 02:12:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:17.977 02:12:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:17.977 02:12:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:17.977 02:12:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:17.977 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:17.977 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.235 02:12:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:18.235 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:18.235 02:12:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:18.235 02:12:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:18.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:18.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:18.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:18.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:18.235 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:18.235 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:18.235 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:18.235 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:18.235 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:18.235 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:18.235 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:18.235 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:18.235 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:18.235 ' 00:34:23.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:23.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:23.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:23.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:23.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:23.538 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:23.538 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:23.538 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:23.538 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:23.538 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:23.538 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:23.538 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:23.538 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:23.538 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1585444 ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1585444' 00:34:23.538 killing process with pid 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1585444 ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1585444 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1585444 ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1585444 00:34:23.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1585444) - No such process 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1585444 is not found' 00:34:23.538 Process with pid 1585444 is not found 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:23.538 00:34:23.538 real 0m15.985s 00:34:23.538 user 0m33.873s 00:34:23.538 sys 0m0.782s 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:23.538 02:12:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.538 ************************************ 00:34:23.538 END TEST spdkcli_nvmf_tcp 00:34:23.538 ************************************ 00:34:23.538 02:12:38 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:23.538 02:12:38 -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:23.538 02:12:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:23.538 02:12:38 -- common/autotest_common.sh@10 -- # set +x 00:34:23.796 ************************************ 00:34:23.796 START TEST nvmf_identify_passthru 00:34:23.796 ************************************ 00:34:23.796 02:12:38 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:23.796 * Looking for test storage... 00:34:23.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:23.796 02:12:38 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:23.796 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:23.796 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:23.796 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:23.796 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:23.796 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:23.796 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.797 02:12:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.797 02:12:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.797 02:12:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:23.797 02:12:38 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.797 02:12:38 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.797 02:12:38 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.797 02:12:38 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:23.797 02:12:38 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.797 02:12:38 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.797 02:12:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:23.797 02:12:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:23.797 02:12:38 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:23.797 02:12:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
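The trace above shows nvmf/common.sh building the initiator identity once per run: nvme gen-hostnqn produces the host NQN and its embedded UUID is reused as the host ID. A minimal sketch of that pattern, assuming stock nvme-cli output; the parameter-expansion extraction here is an illustration, not the script's exact code:

    # Generate a host NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # Reuse the UUID suffix as the host ID, matching the values seen in the log
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    # Both are passed to every initiator-side command as a pair of flags
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")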
00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:25.697 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:25.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:25.697 02:12:40 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:25.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:25.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:25.697 02:12:40 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:25.697 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:25.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:34:25.956 00:34:25.956 --- 10.0.0.2 ping statistics --- 00:34:25.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.956 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:25.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:34:25.956 00:34:25.956 --- 10.0.0.1 ping statistics --- 00:34:25.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.956 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:25.956 02:12:40 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:34:25.956 02:12:40 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:88:00.0 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:25.956 02:12:40 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:25.956 EAL: No free 2048 kB hugepages reported on node 1 00:34:30.205 
02:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:30.205 02:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:30.205 02:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:30.205 02:12:44 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:30.205 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.389 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:34.389 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:34.389 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:34.389 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.389 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:34.389 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:34.389 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.389 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1589956 00:34:34.389 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:34.390 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:34.390 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1589956 00:34:34.390 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1589956 ']' 00:34:34.390 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:34.390 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:34.390 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:34.390 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:34.390 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.390 [2024-07-24 02:12:49.173309] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:34:34.390 [2024-07-24 02:12:49.173432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.390 EAL: No free 2048 kB hugepages reported on node 1 00:34:34.390 [2024-07-24 02:12:49.252559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:34.648 [2024-07-24 02:12:49.350576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.648 [2024-07-24 02:12:49.350643] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
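Before the target comes up, the test records the local controller's serial and model straight over PCIe; the same two fields are read again later through the NVMe/TCP passthru subsystem and compared. A condensed sketch of that baseline capture using the identify tool and grep/awk filters shown in the trace ($rootdir and taking only the first BDF are assumptions):

    # First NVMe BDF on the host, taken from the generated bdev config
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    # Baseline identity read directly over PCIe
    nvme_serial_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')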
00:34:34.648 [2024-07-24 02:12:49.350659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.648 [2024-07-24 02:12:49.350673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:34.648 [2024-07-24 02:12:49.350684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:34.648 [2024-07-24 02:12:49.350770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.648 [2024-07-24 02:12:49.350837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:34.648 [2024-07-24 02:12:49.350860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:34.648 [2024-07-24 02:12:49.350863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:34.648 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.648 INFO: Log level set to 20 00:34:34.648 INFO: Requests: 00:34:34.648 { 00:34:34.648 "jsonrpc": "2.0", 00:34:34.648 "method": "nvmf_set_config", 00:34:34.648 "id": 1, 00:34:34.648 "params": { 00:34:34.648 "admin_cmd_passthru": { 00:34:34.648 "identify_ctrlr": true 00:34:34.648 } 00:34:34.648 } 00:34:34.648 } 00:34:34.648 00:34:34.648 INFO: response: 00:34:34.648 { 00:34:34.648 "jsonrpc": "2.0", 00:34:34.648 "id": 1, 00:34:34.648 "result": true 00:34:34.648 } 00:34:34.648 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.648 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.648 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.648 INFO: Setting log level to 20 00:34:34.648 INFO: Setting log level to 20 00:34:34.648 INFO: Log level set to 20 00:34:34.648 INFO: Log level set to 20 00:34:34.648 INFO: Requests: 00:34:34.648 { 00:34:34.648 "jsonrpc": "2.0", 00:34:34.648 "method": "framework_start_init", 00:34:34.648 "id": 1 00:34:34.648 } 00:34:34.648 00:34:34.648 INFO: Requests: 00:34:34.648 { 00:34:34.648 "jsonrpc": "2.0", 00:34:34.648 "method": "framework_start_init", 00:34:34.648 "id": 1 00:34:34.648 } 00:34:34.648 00:34:34.906 [2024-07-24 02:12:49.545673] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:34.906 INFO: response: 00:34:34.906 { 00:34:34.906 "jsonrpc": "2.0", 00:34:34.906 "id": 1, 00:34:34.906 "result": true 00:34:34.906 } 00:34:34.906 00:34:34.906 INFO: response: 00:34:34.906 { 00:34:34.906 "jsonrpc": "2.0", 00:34:34.906 "id": 1, 00:34:34.906 "result": true 00:34:34.906 } 00:34:34.906 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.906 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.906 02:12:49 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:34.906 INFO: Setting log level to 40 00:34:34.906 INFO: Setting log level to 40 00:34:34.906 INFO: Setting log level to 40 00:34:34.906 [2024-07-24 02:12:49.555717] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:34.906 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:34.906 02:12:49 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:34.906 02:12:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:38.185 Nvme0n1 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:38.185 [2024-07-24 02:12:52.448621] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:38.185 [ 00:34:38.185 { 00:34:38.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:38.185 "subtype": "Discovery", 00:34:38.185 "listen_addresses": [], 00:34:38.185 "allow_any_host": true, 00:34:38.185 "hosts": [] 00:34:38.185 }, 00:34:38.185 { 00:34:38.185 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:38.185 "subtype": "NVMe", 00:34:38.185 "listen_addresses": [ 00:34:38.185 { 00:34:38.185 "trtype": "TCP", 00:34:38.185 "adrfam": "IPv4", 00:34:38.185 "traddr": "10.0.0.2", 00:34:38.185 "trsvcid": "4420" 00:34:38.185 } 00:34:38.185 ], 00:34:38.185 "allow_any_host": true, 00:34:38.185 "hosts": [], 00:34:38.185 "serial_number": 
"SPDK00000000000001", 00:34:38.185 "model_number": "SPDK bdev Controller", 00:34:38.185 "max_namespaces": 1, 00:34:38.185 "min_cntlid": 1, 00:34:38.185 "max_cntlid": 65519, 00:34:38.185 "namespaces": [ 00:34:38.185 { 00:34:38.185 "nsid": 1, 00:34:38.185 "bdev_name": "Nvme0n1", 00:34:38.185 "name": "Nvme0n1", 00:34:38.185 "nguid": "9455BD336CB74296B560DEFB2EC05ED5", 00:34:38.185 "uuid": "9455bd33-6cb7-4296-b560-defb2ec05ed5" 00:34:38.185 } 00:34:38.185 ] 00:34:38.185 } 00:34:38.185 ] 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:38.185 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:38.185 EAL: No free 2048 kB hugepages reported on node 1 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:38.185 02:12:52 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:38.185 rmmod nvme_tcp 00:34:38.185 rmmod nvme_fabrics 00:34:38.185 rmmod nvme_keyring 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:38.185 02:12:52 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1589956 ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1589956 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1589956 ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1589956 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1589956 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1589956' 00:34:38.185 killing process with pid 1589956 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1589956 00:34:38.185 02:12:52 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1589956 00:34:40.085 02:12:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:40.085 02:12:54 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:40.085 02:12:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:40.085 02:12:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:40.085 02:12:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:40.085 02:12:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.085 02:12:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:40.085 02:12:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.987 02:12:56 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:41.987 00:34:41.987 real 0m18.107s 00:34:41.987 user 0m27.071s 00:34:41.987 sys 0m2.349s 00:34:41.987 02:12:56 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.987 02:12:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.987 ************************************ 00:34:41.987 END TEST nvmf_identify_passthru 00:34:41.987 ************************************ 00:34:41.987 02:12:56 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:41.987 02:12:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:41.987 02:12:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.987 02:12:56 -- common/autotest_common.sh@10 -- # set +x 00:34:41.987 ************************************ 00:34:41.987 START TEST nvmf_dif 00:34:41.987 ************************************ 00:34:41.987 02:12:56 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:41.987 * Looking for test storage... 
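The passthru test that just finished drives the target entirely over JSON-RPC: enable identify passthru while the app is still waiting for RPC, initialize the framework, create the TCP transport, then re-export the local PCIe controller as a single-namespace subsystem. A condensed sketch of that sequence as plain rpc.py calls (rpc_cmd in the trace is the harness wrapper; the default RPC socket is assumed):

    # Must be issued before framework_start_init for the passthru handler to take effect
    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Attach the local PCIe controller and publish it over NVMe/TCP
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420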
00:34:41.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.987 02:12:56 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.987 02:12:56 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.987 02:12:56 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.987 02:12:56 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.987 02:12:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.987 02:12:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.987 02:12:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.987 02:12:56 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:41.987 02:12:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:41.987 02:12:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:41.987 02:12:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:41.987 02:12:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:41.987 02:12:56 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:41.987 02:12:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.987 02:12:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:41.987 02:12:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:41.987 02:12:56 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:41.987 02:12:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:43.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.888 02:12:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:43.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
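The discovery loop above resolves each matching PCI function to its kernel net device by globbing sysfs rather than parsing tool output. A minimal standalone sketch of that lookup, with the device path taken from the log:

    pci=0000:0a:00.0
    # Every netdev registered by this PCI function appears as a directory here
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names, e.g. cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"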
00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:43.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:43.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:43.889 02:12:58 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:43.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:43.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:34:43.889 00:34:43.889 --- 10.0.0.2 ping statistics --- 00:34:43.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.889 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:43.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:43.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:34:43.889 00:34:43.889 --- 10.0.0.1 ping statistics --- 00:34:43.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:43.889 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:43.889 02:12:58 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:44.824 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:44.824 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:44.824 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:44.824 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:44.824 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:44.824 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:44.824 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:44.824 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:44.824 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:44.824 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:44.824 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:44.824 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:44.824 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:44.824 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:44.824 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:44.824 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:44.824 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:45.082 02:12:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:45.082 02:12:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1593212 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:45.082 02:12:59 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1593212 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1593212 ']' 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:45.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:45.082 02:12:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.082 [2024-07-24 02:12:59.935273] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:34:45.082 [2024-07-24 02:12:59.935378] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:45.082 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.339 [2024-07-24 02:12:59.999836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.339 [2024-07-24 02:13:00.100173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:45.339 [2024-07-24 02:13:00.100255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:45.339 [2024-07-24 02:13:00.100268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:45.339 [2024-07-24 02:13:00.100278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:45.339 [2024-07-24 02:13:00.100288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
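nvmfappstart above launches the target inside the cvl_0_0_ns_spdk namespace and then blocks until its RPC socket answers. A rough sketch of that start-and-wait pattern; the polling loop is an assumption, and the harness's waitforlisten is more involved:

    # Run the target in the namespace that holds the target-side interface
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # Wait until the app creates and answers on its default RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done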
00:34:45.339 [2024-07-24 02:13:00.100314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.339 02:13:00 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:45.339 02:13:00 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:45.339 02:13:00 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:45.339 02:13:00 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:45.339 02:13:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.597 02:13:00 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.597 02:13:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:45.597 02:13:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:45.597 02:13:00 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.597 02:13:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.597 [2024-07-24 02:13:00.246750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:45.597 02:13:00 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.597 02:13:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:45.597 02:13:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:45.597 02:13:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:45.597 02:13:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:45.597 ************************************ 00:34:45.597 START TEST fio_dif_1_default 00:34:45.597 ************************************ 00:34:45.597 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:45.597 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:45.597 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:45.598 bdev_null0 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:45.598 [2024-07-24 02:13:00.303048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:45.598 { 00:34:45.598 "params": { 00:34:45.598 "name": "Nvme$subsystem", 00:34:45.598 "trtype": "$TEST_TRANSPORT", 00:34:45.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:45.598 "adrfam": "ipv4", 00:34:45.598 "trsvcid": "$NVMF_PORT", 00:34:45.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:45.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:45.598 "hdgst": ${hdgst:-false}, 00:34:45.598 "ddgst": ${ddgst:-false} 00:34:45.598 }, 00:34:45.598 "method": "bdev_nvme_attach_controller" 00:34:45.598 } 00:34:45.598 EOF 00:34:45.598 )") 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:45.598 "params": { 00:34:45.598 "name": "Nvme0", 00:34:45.598 "trtype": "tcp", 00:34:45.598 "traddr": "10.0.0.2", 00:34:45.598 "adrfam": "ipv4", 00:34:45.598 "trsvcid": "4420", 00:34:45.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:45.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:45.598 "hdgst": false, 00:34:45.598 "ddgst": false 00:34:45.598 }, 00:34:45.598 "method": "bdev_nvme_attach_controller" 00:34:45.598 }' 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:45.598 02:13:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:45.856 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:45.856 fio-3.35 00:34:45.856 Starting 1 thread 00:34:45.856 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.084 00:34:58.084 filename0: (groupid=0, jobs=1): err= 0: pid=1593444: Wed Jul 24 02:13:11 2024 00:34:58.084 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:34:58.084 slat (nsec): min=6655, max=50305, avg=8586.15, stdev=3378.46 00:34:58.084 clat (usec): min=40847, max=44383, avg=40991.03, stdev=223.07 00:34:58.084 lat (usec): min=40855, max=44413, avg=40999.62, stdev=223.55 00:34:58.084 clat percentiles (usec): 00:34:58.084 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:58.084 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:58.084 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:58.084 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:34:58.084 | 99.99th=[44303] 00:34:58.084 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:34:58.084 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:34:58.084 lat (msec) : 50=100.00% 00:34:58.084 cpu : usr=90.46%, sys=9.27%, ctx=21, majf=0, minf=268 00:34:58.084 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.084 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.084 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:58.084 00:34:58.084 Run status group 0 (all jobs): 00:34:58.084 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.084 00:34:58.084 real 0m11.205s 00:34:58.084 user 0m10.399s 00:34:58.084 sys 0m1.229s 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:58.084 ************************************ 00:34:58.084 END TEST fio_dif_1_default 00:34:58.084 ************************************ 00:34:58.084 02:13:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:58.084 02:13:11 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:58.084 02:13:11 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:58.084 02:13:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.084 ************************************ 00:34:58.084 START TEST fio_dif_1_multi_subsystems 00:34:58.084 ************************************ 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.084 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.084 bdev_null0 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 [2024-07-24 02:13:11.559263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 bdev_null1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.085 { 00:34:58.085 "params": { 00:34:58.085 "name": "Nvme$subsystem", 00:34:58.085 "trtype": "$TEST_TRANSPORT", 00:34:58.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.085 "adrfam": "ipv4", 00:34:58.085 "trsvcid": "$NVMF_PORT", 00:34:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.085 "hdgst": ${hdgst:-false}, 00:34:58.085 "ddgst": ${ddgst:-false} 00:34:58.085 }, 00:34:58.085 "method": "bdev_nvme_attach_controller" 00:34:58.085 } 00:34:58.085 EOF 00:34:58.085 )") 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1339 -- # shift 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.085 { 00:34:58.085 "params": { 00:34:58.085 "name": "Nvme$subsystem", 00:34:58.085 "trtype": "$TEST_TRANSPORT", 00:34:58.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.085 "adrfam": "ipv4", 00:34:58.085 "trsvcid": "$NVMF_PORT", 00:34:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.085 "hdgst": ${hdgst:-false}, 00:34:58.085 "ddgst": ${ddgst:-false} 00:34:58.085 }, 00:34:58.085 "method": "bdev_nvme_attach_controller" 00:34:58.085 } 00:34:58.085 EOF 00:34:58.085 )") 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:58.085 "params": { 00:34:58.085 "name": "Nvme0", 00:34:58.085 "trtype": "tcp", 00:34:58.085 "traddr": "10.0.0.2", 00:34:58.085 "adrfam": "ipv4", 00:34:58.085 "trsvcid": "4420", 00:34:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.085 "hdgst": false, 00:34:58.085 "ddgst": false 00:34:58.085 }, 00:34:58.085 "method": "bdev_nvme_attach_controller" 00:34:58.085 },{ 00:34:58.085 "params": { 00:34:58.085 "name": "Nvme1", 00:34:58.085 "trtype": "tcp", 00:34:58.085 "traddr": "10.0.0.2", 00:34:58.085 "adrfam": "ipv4", 00:34:58.085 "trsvcid": "4420", 00:34:58.085 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.085 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.085 "hdgst": false, 00:34:58.085 "ddgst": false 00:34:58.085 }, 00:34:58.085 "method": "bdev_nvme_attach_controller" 00:34:58.085 }' 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.085 02:13:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.085 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:58.086 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:58.086 fio-3.35 00:34:58.086 Starting 2 threads 00:34:58.086 EAL: No free 2048 kB hugepages reported on node 1 00:35:08.054 00:35:08.054 filename0: (groupid=0, jobs=1): err= 0: pid=1594843: Wed Jul 24 02:13:22 2024 00:35:08.054 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10038msec) 00:35:08.054 slat (nsec): min=5335, max=57677, avg=9129.77, stdev=3463.00 00:35:08.054 clat (usec): min=615, max=42078, avg=21058.09, stdev=20266.90 00:35:08.054 lat (usec): min=623, max=42093, avg=21067.21, stdev=20266.74 00:35:08.054 clat percentiles (usec): 00:35:08.054 | 1.00th=[ 668], 5.00th=[ 676], 10.00th=[ 685], 20.00th=[ 693], 00:35:08.054 | 30.00th=[ 701], 40.00th=[ 725], 50.00th=[40633], 60.00th=[41157], 00:35:08.054 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:08.054 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:35:08.054 | 99.99th=[42206] 
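On the initiator side these tests drive stock fio through SPDK's spdk_bdev ioengine rather than the kernel NVMe/TCP host: the JSON printed just above (two bdev_nvme_attach_controller entries, Nvme0 against cnode0 and Nvme1 against cnode1, both at 10.0.0.2:4420) is passed to the plugin on /dev/fd/62 and the generated job file on /dev/fd/61. A standalone sketch of the same invocation, with the two descriptors replaced by ordinary files (subsys.json and dif.fio are illustrative names; the plugin path is relative to the spdk checkout, the fio path is the one recorded in this log):

  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf subsys.json dif.fio

Because --dif-insert-or-strip is set on the transport and the null bdevs carry --dif-type metadata, protection-information handling stays on the target side and the fio jobs themselves remain plain randread jobs, as the filename0/filename1 banners and results below show.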
00:35:08.054 bw ( KiB/s): min= 672, max= 768, per=66.51%, avg=760.00, stdev=22.92, samples=20 00:35:08.054 iops : min= 168, max= 192, avg=190.00, stdev= 5.73, samples=20 00:35:08.054 lat (usec) : 750=43.59%, 1000=5.78% 00:35:08.054 lat (msec) : 2=0.42%, 50=50.21% 00:35:08.054 cpu : usr=94.23%, sys=5.46%, ctx=14, majf=0, minf=147 00:35:08.054 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.054 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.054 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:08.054 filename1: (groupid=0, jobs=1): err= 0: pid=1594844: Wed Jul 24 02:13:22 2024 00:35:08.054 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10039msec) 00:35:08.054 slat (nsec): min=4758, max=28858, avg=9258.99, stdev=3072.12 00:35:08.054 clat (usec): min=40765, max=43250, avg=41623.32, stdev=501.14 00:35:08.054 lat (usec): min=40772, max=43266, avg=41632.58, stdev=500.98 00:35:08.054 clat percentiles (usec): 00:35:08.054 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:08.054 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:08.054 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:08.054 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:08.054 | 99.99th=[43254] 00:35:08.054 bw ( KiB/s): min= 352, max= 416, per=33.60%, avg=384.00, stdev=14.68, samples=20 00:35:08.054 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:35:08.054 lat (msec) : 50=100.00% 00:35:08.054 cpu : usr=94.22%, sys=5.47%, ctx=10, majf=0, minf=102 00:35:08.054 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.054 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.054 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.054 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.054 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:08.054 00:35:08.054 Run status group 0 (all jobs): 00:35:08.054 READ: bw=1143KiB/s (1170kB/s), 384KiB/s-759KiB/s (393kB/s-777kB/s), io=11.2MiB (11.7MB), run=10038-10039msec 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.054 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.314 00:35:08.314 real 0m11.434s 00:35:08.314 user 0m20.102s 00:35:08.314 sys 0m1.408s 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:08.314 02:13:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 ************************************ 00:35:08.314 END TEST fio_dif_1_multi_subsystems 00:35:08.314 ************************************ 00:35:08.314 02:13:22 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:08.314 02:13:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:08.314 02:13:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:08.314 02:13:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 ************************************ 00:35:08.314 START TEST fio_dif_rand_params 00:35:08.314 ************************************ 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.314 02:13:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 bdev_null0 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:08.314 [2024-07-24 02:13:23.046801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:08.314 { 00:35:08.314 "params": { 00:35:08.314 "name": "Nvme$subsystem", 00:35:08.314 "trtype": "$TEST_TRANSPORT", 00:35:08.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.314 "adrfam": "ipv4", 00:35:08.314 
"trsvcid": "$NVMF_PORT", 00:35:08.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.314 "hdgst": ${hdgst:-false}, 00:35:08.314 "ddgst": ${ddgst:-false} 00:35:08.314 }, 00:35:08.314 "method": "bdev_nvme_attach_controller" 00:35:08.314 } 00:35:08.314 EOF 00:35:08.314 )") 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:08.314 02:13:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:08.314 "params": { 00:35:08.314 "name": "Nvme0", 00:35:08.314 "trtype": "tcp", 00:35:08.314 "traddr": "10.0.0.2", 00:35:08.314 "adrfam": "ipv4", 00:35:08.314 "trsvcid": "4420", 00:35:08.314 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.315 "hdgst": false, 00:35:08.315 "ddgst": false 00:35:08.315 }, 00:35:08.315 "method": "bdev_nvme_attach_controller" 00:35:08.315 }' 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:08.315 02:13:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.573 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:08.573 ... 
00:35:08.573 fio-3.35 00:35:08.573 Starting 3 threads 00:35:08.573 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.229 00:35:15.229 filename0: (groupid=0, jobs=1): err= 0: pid=1596239: Wed Jul 24 02:13:28 2024 00:35:15.229 read: IOPS=191, BW=23.9MiB/s (25.0MB/s)(120MiB/5008msec) 00:35:15.229 slat (nsec): min=7343, max=74529, avg=12343.27, stdev=3849.30 00:35:15.229 clat (usec): min=6096, max=59190, avg=15678.03, stdev=10958.65 00:35:15.229 lat (usec): min=6109, max=59226, avg=15690.37, stdev=10958.56 00:35:15.229 clat percentiles (usec): 00:35:15.229 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10814], 00:35:15.229 | 30.00th=[11469], 40.00th=[11994], 50.00th=[12649], 60.00th=[13304], 00:35:15.229 | 70.00th=[14222], 80.00th=[15008], 90.00th=[17433], 95.00th=[51119], 00:35:15.229 | 99.00th=[53216], 99.50th=[55313], 99.90th=[58983], 99.95th=[58983], 00:35:15.229 | 99.99th=[58983] 00:35:15.229 bw ( KiB/s): min=16128, max=31744, per=31.48%, avg=24422.40, stdev=5513.35, samples=10 00:35:15.229 iops : min= 126, max= 248, avg=190.80, stdev=43.07, samples=10 00:35:15.229 lat (msec) : 10=13.38%, 20=78.16%, 50=1.78%, 100=6.69% 00:35:15.229 cpu : usr=91.03%, sys=8.53%, ctx=14, majf=0, minf=93 00:35:15.229 IO depths : 1=5.7%, 2=94.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.229 issued rwts: total=957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.229 filename0: (groupid=0, jobs=1): err= 0: pid=1596240: Wed Jul 24 02:13:28 2024 00:35:15.229 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5047msec) 00:35:15.229 slat (nsec): min=5041, max=37562, avg=13382.99, stdev=3150.09 00:35:15.229 clat (usec): min=5313, max=88313, avg=13926.69, stdev=6298.14 00:35:15.229 lat (usec): min=5325, max=88326, avg=13940.07, stdev=6298.20 00:35:15.229 clat percentiles (usec): 00:35:15.229 | 1.00th=[ 6128], 5.00th=[ 7635], 10.00th=[ 8979], 20.00th=[10159], 00:35:15.229 | 30.00th=[11207], 40.00th=[12518], 50.00th=[13698], 60.00th=[14484], 00:35:15.229 | 70.00th=[15270], 80.00th=[16450], 90.00th=[17957], 95.00th=[18744], 00:35:15.229 | 99.00th=[51119], 99.50th=[52691], 99.90th=[53740], 99.95th=[88605], 00:35:15.229 | 99.99th=[88605] 00:35:15.229 bw ( KiB/s): min=22272, max=35072, per=35.64%, avg=27653.90, stdev=4034.86, samples=10 00:35:15.229 iops : min= 174, max= 274, avg=216.00, stdev=31.50, samples=10 00:35:15.229 lat (msec) : 10=18.74%, 20=79.13%, 50=0.83%, 100=1.29% 00:35:15.229 cpu : usr=90.43%, sys=9.07%, ctx=10, majf=0, minf=72 00:35:15.229 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.229 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.229 filename0: (groupid=0, jobs=1): err= 0: pid=1596241: Wed Jul 24 02:13:28 2024 00:35:15.229 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(127MiB/5043msec) 00:35:15.229 slat (nsec): min=7447, max=36253, avg=13282.24, stdev=3062.36 00:35:15.229 clat (usec): min=5829, max=88235, avg=14827.74, stdev=8589.22 00:35:15.229 lat (usec): min=5840, max=88248, avg=14841.02, stdev=8589.25 00:35:15.229 clat percentiles (usec): 
00:35:15.229 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10421], 00:35:15.229 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13435], 60.00th=[14091], 00:35:15.229 | 70.00th=[15008], 80.00th=[15795], 90.00th=[17433], 95.00th=[19530], 00:35:15.229 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[88605], 00:35:15.229 | 99.99th=[88605] 00:35:15.229 bw ( KiB/s): min=18212, max=32000, per=33.53%, avg=26013.20, stdev=4327.51, samples=10 00:35:15.229 iops : min= 142, max= 250, avg=203.20, stdev=33.87, samples=10 00:35:15.229 lat (msec) : 10=15.90%, 20=79.39%, 50=1.77%, 100=2.94% 00:35:15.229 cpu : usr=90.62%, sys=8.94%, ctx=11, majf=0, minf=108 00:35:15.229 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.229 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.229 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:15.229 00:35:15.229 Run status group 0 (all jobs): 00:35:15.229 READ: bw=75.8MiB/s (79.4MB/s), 23.9MiB/s-26.8MiB/s (25.0MB/s-28.1MB/s), io=382MiB (401MB), run=5008-5047msec 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:15.229 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 bdev_null0 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 [2024-07-24 02:13:29.154969] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 bdev_null1 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 bdev_null2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.230 { 00:35:15.230 "params": { 00:35:15.230 "name": "Nvme$subsystem", 00:35:15.230 "trtype": "$TEST_TRANSPORT", 00:35:15.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.230 "adrfam": "ipv4", 00:35:15.230 "trsvcid": "$NVMF_PORT", 00:35:15.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.230 "hdgst": ${hdgst:-false}, 00:35:15.230 "ddgst": ${ddgst:-false} 00:35:15.230 }, 00:35:15.230 "method": "bdev_nvme_attach_controller" 00:35:15.230 } 00:35:15.230 EOF 00:35:15.230 )") 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.230 { 00:35:15.230 "params": { 00:35:15.230 "name": "Nvme$subsystem", 00:35:15.230 "trtype": "$TEST_TRANSPORT", 00:35:15.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.230 "adrfam": "ipv4", 00:35:15.230 "trsvcid": "$NVMF_PORT", 00:35:15.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.230 "hdgst": ${hdgst:-false}, 00:35:15.230 "ddgst": ${ddgst:-false} 00:35:15.230 }, 00:35:15.230 "method": "bdev_nvme_attach_controller" 00:35:15.230 } 00:35:15.230 EOF 00:35:15.230 )") 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.230 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.231 { 00:35:15.231 "params": { 00:35:15.231 "name": "Nvme$subsystem", 00:35:15.231 "trtype": "$TEST_TRANSPORT", 00:35:15.231 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.231 "adrfam": "ipv4", 00:35:15.231 "trsvcid": "$NVMF_PORT", 00:35:15.231 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.231 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.231 "hdgst": ${hdgst:-false}, 00:35:15.231 "ddgst": ${ddgst:-false} 00:35:15.231 }, 00:35:15.231 "method": "bdev_nvme_attach_controller" 00:35:15.231 } 00:35:15.231 EOF 00:35:15.231 )") 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:15.231 "params": { 00:35:15.231 "name": "Nvme0", 00:35:15.231 "trtype": "tcp", 00:35:15.231 "traddr": "10.0.0.2", 00:35:15.231 "adrfam": "ipv4", 00:35:15.231 "trsvcid": "4420", 00:35:15.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.231 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.231 "hdgst": false, 00:35:15.231 "ddgst": false 00:35:15.231 }, 00:35:15.231 "method": "bdev_nvme_attach_controller" 00:35:15.231 },{ 00:35:15.231 "params": { 00:35:15.231 "name": "Nvme1", 00:35:15.231 "trtype": "tcp", 00:35:15.231 "traddr": "10.0.0.2", 00:35:15.231 "adrfam": "ipv4", 00:35:15.231 "trsvcid": "4420", 00:35:15.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:15.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:15.231 "hdgst": false, 00:35:15.231 "ddgst": false 00:35:15.231 }, 00:35:15.231 "method": "bdev_nvme_attach_controller" 00:35:15.231 },{ 00:35:15.231 "params": { 00:35:15.231 "name": "Nvme2", 00:35:15.231 "trtype": "tcp", 00:35:15.231 "traddr": "10.0.0.2", 00:35:15.231 "adrfam": "ipv4", 00:35:15.231 "trsvcid": "4420", 00:35:15.231 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:15.231 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:15.231 "hdgst": false, 00:35:15.231 "ddgst": false 00:35:15.231 }, 00:35:15.231 "method": "bdev_nvme_attach_controller" 00:35:15.231 }' 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1343 -- # asan_lib= 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:15.231 02:13:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.231 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:15.231 ... 00:35:15.231 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:15.231 ... 00:35:15.231 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:15.231 ... 00:35:15.231 fio-3.35 00:35:15.231 Starting 24 threads 00:35:15.231 EAL: No free 2048 kB hugepages reported on node 1 00:35:27.439 00:35:27.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596993: Wed Jul 24 02:13:40 2024 00:35:27.439 read: IOPS=332, BW=1329KiB/s (1360kB/s)(13.1MiB/10116msec) 00:35:27.439 slat (usec): min=8, max=124, avg=48.38, stdev=25.87 00:35:27.439 clat (msec): min=21, max=319, avg=47.77, stdev=54.50 00:35:27.439 lat (msec): min=21, max=319, avg=47.82, stdev=54.50 00:35:27.439 clat percentiles (msec): 00:35:27.439 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.439 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.439 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 209], 00:35:27.439 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 321], 00:35:27.439 | 99.99th=[ 321] 00:35:27.439 bw ( KiB/s): min= 144, max= 1920, per=4.14%, avg=1337.60, stdev=802.45, samples=20 00:35:27.439 iops : min= 36, max= 480, avg=334.40, stdev=200.61, samples=20 00:35:27.439 lat (msec) : 50=92.86%, 250=3.93%, 500=3.21% 00:35:27.439 cpu : usr=97.85%, sys=1.58%, ctx=88, majf=0, minf=30 00:35:27.439 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:27.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.439 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.439 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596994: Wed Jul 24 02:13:40 2024 00:35:27.439 read: IOPS=331, BW=1325KiB/s (1357kB/s)(13.1MiB/10095msec) 00:35:27.439 slat (usec): min=10, max=135, avg=36.00, stdev=13.54 00:35:27.439 clat (msec): min=21, max=402, avg=47.98, stdev=57.10 00:35:27.439 lat (msec): min=21, max=402, avg=48.02, stdev=57.10 00:35:27.439 clat percentiles (msec): 00:35:27.439 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.439 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.439 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 211], 00:35:27.439 | 99.00th=[ 309], 99.50th=[ 338], 99.90th=[ 388], 99.95th=[ 401], 00:35:27.439 | 99.99th=[ 401] 00:35:27.439 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1331.20, stdev=815.03, samples=20 00:35:27.439 iops : min= 32, max= 512, avg=332.80, stdev=203.76, samples=20 00:35:27.439 lat (msec) : 50=93.30%, 250=3.08%, 500=3.62% 00:35:27.439 cpu : usr=95.30%, sys=2.90%, ctx=243, majf=0, minf=31 00:35:27.439 IO depths : 1=6.0%, 
2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:27.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.439 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.439 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.439 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.439 filename0: (groupid=0, jobs=1): err= 0: pid=1596995: Wed Jul 24 02:13:40 2024 00:35:27.439 read: IOPS=333, BW=1333KiB/s (1365kB/s)(13.2MiB/10131msec) 00:35:27.439 slat (usec): min=7, max=110, avg=31.83, stdev=11.11 00:35:27.439 clat (msec): min=23, max=389, avg=47.56, stdev=51.61 00:35:27.439 lat (msec): min=24, max=389, avg=47.60, stdev=51.61 00:35:27.439 clat percentiles (msec): 00:35:27.439 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.439 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.439 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 203], 00:35:27.439 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 309], 99.95th=[ 388], 00:35:27.439 | 99.99th=[ 388] 00:35:27.440 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=1344.00, stdev=793.95, samples=20 00:35:27.440 iops : min= 32, max= 480, avg=336.00, stdev=198.49, samples=20 00:35:27.440 lat (msec) : 50=92.42%, 250=5.21%, 500=2.37% 00:35:27.440 cpu : usr=94.70%, sys=3.01%, ctx=316, majf=0, minf=31 00:35:27.440 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596996: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=352, BW=1409KiB/s (1443kB/s)(13.8MiB/10017msec) 00:35:27.440 slat (nsec): min=4055, max=58613, avg=28278.63, stdev=10784.42 00:35:27.440 clat (msec): min=10, max=300, avg=45.19, stdev=41.16 00:35:27.440 lat (msec): min=10, max=300, avg=45.22, stdev=41.15 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.440 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 165], 00:35:27.440 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 300], 99.95th=[ 300], 00:35:27.440 | 99.99th=[ 300] 00:35:27.440 bw ( KiB/s): min= 256, max= 2176, per=4.35%, avg=1404.80, stdev=744.38, samples=20 00:35:27.440 iops : min= 64, max= 544, avg=351.20, stdev=186.10, samples=20 00:35:27.440 lat (msec) : 20=0.20%, 50=90.65%, 100=1.11%, 250=7.65%, 500=0.40% 00:35:27.440 cpu : usr=98.00%, sys=1.63%, ctx=23, majf=0, minf=39 00:35:27.440 IO depths : 1=5.5%, 2=11.1%, 4=22.7%, 8=53.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=3528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596997: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=331, BW=1325KiB/s (1357kB/s)(13.1MiB/10097msec) 00:35:27.440 slat (usec): min=8, max=130, avg=42.20, stdev=21.79 00:35:27.440 clat (msec): min=31, max=401, avg=47.93, 
stdev=57.34 00:35:27.440 lat (msec): min=32, max=401, avg=47.97, stdev=57.34 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.440 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 213], 00:35:27.440 | 99.00th=[ 309], 99.50th=[ 342], 99.90th=[ 376], 99.95th=[ 401], 00:35:27.440 | 99.99th=[ 401] 00:35:27.440 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1331.20, stdev=800.33, samples=20 00:35:27.440 iops : min= 32, max= 480, avg=332.80, stdev=200.08, samples=20 00:35:27.440 lat (msec) : 50=93.36%, 250=2.87%, 500=3.77% 00:35:27.440 cpu : usr=96.91%, sys=2.08%, ctx=58, majf=0, minf=37 00:35:27.440 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596998: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=331, BW=1326KiB/s (1357kB/s)(13.1MiB/10091msec) 00:35:27.440 slat (usec): min=8, max=168, avg=56.05, stdev=27.37 00:35:27.440 clat (msec): min=28, max=358, avg=47.77, stdev=56.25 00:35:27.440 lat (msec): min=28, max=358, avg=47.83, stdev=56.25 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.440 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 211], 00:35:27.440 | 99.00th=[ 305], 99.50th=[ 326], 99.90th=[ 347], 99.95th=[ 359], 00:35:27.440 | 99.99th=[ 359] 00:35:27.440 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1331.20, stdev=811.03, samples=20 00:35:27.440 iops : min= 32, max= 480, avg=332.80, stdev=202.76, samples=20 00:35:27.440 lat (msec) : 50=93.30%, 250=3.41%, 500=3.29% 00:35:27.440 cpu : usr=94.61%, sys=3.02%, ctx=139, majf=0, minf=37 00:35:27.440 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename0: (groupid=0, jobs=1): err= 0: pid=1596999: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=330, BW=1323KiB/s (1355kB/s)(13.1MiB/10107msec) 00:35:27.440 slat (nsec): min=8611, max=95609, avg=29780.53, stdev=10914.36 00:35:27.440 clat (msec): min=22, max=402, avg=48.10, stdev=57.52 00:35:27.440 lat (msec): min=22, max=402, avg=48.13, stdev=57.52 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.440 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 213], 00:35:27.440 | 99.00th=[ 309], 99.50th=[ 351], 99.90th=[ 376], 99.95th=[ 405], 00:35:27.440 | 99.99th=[ 405] 00:35:27.440 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1331.20, stdev=812.10, samples=20 00:35:27.440 iops : min= 32, max= 480, avg=332.80, stdev=203.02, samples=20 00:35:27.440 lat (msec) : 50=93.36%, 250=2.81%, 500=3.83% 
00:35:27.440 cpu : usr=98.22%, sys=1.37%, ctx=17, majf=0, minf=37 00:35:27.440 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename0: (groupid=0, jobs=1): err= 0: pid=1597000: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=411, BW=1644KiB/s (1684kB/s)(16.3MiB/10131msec) 00:35:27.440 slat (usec): min=7, max=126, avg=15.49, stdev=10.91 00:35:27.440 clat (msec): min=12, max=253, avg=38.44, stdev=40.08 00:35:27.440 lat (msec): min=12, max=253, avg=38.45, stdev=40.08 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 21], 20.00th=[ 23], 00:35:27.440 | 30.00th=[ 25], 40.00th=[ 26], 50.00th=[ 31], 60.00th=[ 33], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 161], 00:35:27.440 | 99.00th=[ 222], 99.50th=[ 230], 99.90th=[ 234], 99.95th=[ 247], 00:35:27.440 | 99.99th=[ 253] 00:35:27.440 bw ( KiB/s): min= 256, max= 2800, per=5.14%, avg=1659.20, stdev=995.76, samples=20 00:35:27.440 iops : min= 64, max= 700, avg=414.80, stdev=248.94, samples=20 00:35:27.440 lat (msec) : 20=4.27%, 50=88.04%, 100=0.62%, 250=7.01%, 500=0.05% 00:35:27.440 cpu : usr=97.87%, sys=1.61%, ctx=41, majf=0, minf=36 00:35:27.440 IO depths : 1=2.5%, 2=5.4%, 4=14.5%, 8=67.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=4164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename1: (groupid=0, jobs=1): err= 0: pid=1597001: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=331, BW=1325KiB/s (1357kB/s)(13.1MiB/10096msec) 00:35:27.440 slat (usec): min=12, max=129, avg=49.13, stdev=25.53 00:35:27.440 clat (msec): min=31, max=402, avg=47.86, stdev=57.32 00:35:27.440 lat (msec): min=31, max=402, avg=47.91, stdev=57.33 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.440 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 213], 00:35:27.440 | 99.00th=[ 309], 99.50th=[ 338], 99.90th=[ 376], 99.95th=[ 401], 00:35:27.440 | 99.99th=[ 401] 00:35:27.440 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1331.20, stdev=815.27, samples=20 00:35:27.440 iops : min= 32, max= 512, avg=332.80, stdev=203.82, samples=20 00:35:27.440 lat (msec) : 50=93.36%, 250=2.87%, 500=3.77% 00:35:27.440 cpu : usr=96.09%, sys=2.32%, ctx=221, majf=0, minf=38 00:35:27.440 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.440 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.440 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.440 filename1: (groupid=0, jobs=1): err= 0: pid=1597003: Wed Jul 24 02:13:40 2024 00:35:27.440 read: IOPS=334, BW=1338KiB/s (1370kB/s)(13.2MiB/10091msec) 00:35:27.440 slat (usec): min=8, 
max=104, avg=31.76, stdev=10.92 00:35:27.440 clat (msec): min=25, max=368, avg=47.54, stdev=51.65 00:35:27.440 lat (msec): min=25, max=368, avg=47.57, stdev=51.65 00:35:27.440 clat percentiles (msec): 00:35:27.440 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.440 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.440 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 207], 00:35:27.440 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 368], 00:35:27.440 | 99.99th=[ 368] 00:35:27.440 bw ( KiB/s): min= 240, max= 1920, per=4.16%, avg=1344.00, stdev=792.89, samples=20 00:35:27.440 iops : min= 60, max= 480, avg=336.00, stdev=198.22, samples=20 00:35:27.440 lat (msec) : 50=92.42%, 100=0.06%, 250=5.12%, 500=2.40% 00:35:27.440 cpu : usr=97.98%, sys=1.50%, ctx=43, majf=0, minf=38 00:35:27.440 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:27.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename1: (groupid=0, jobs=1): err= 0: pid=1597004: Wed Jul 24 02:13:40 2024 00:35:27.441 read: IOPS=334, BW=1338KiB/s (1371kB/s)(13.2MiB/10089msec) 00:35:27.441 slat (usec): min=8, max=139, avg=46.31, stdev=25.80 00:35:27.441 clat (msec): min=28, max=362, avg=47.05, stdev=50.12 00:35:27.441 lat (msec): min=28, max=362, avg=47.10, stdev=50.11 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.441 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 192], 00:35:27.441 | 99.00th=[ 284], 99.50th=[ 305], 99.90th=[ 347], 99.95th=[ 363], 00:35:27.441 | 99.99th=[ 363] 00:35:27.441 bw ( KiB/s): min= 256, max= 1920, per=4.18%, avg=1349.60, stdev=785.13, samples=20 00:35:27.441 iops : min= 64, max= 480, avg=337.40, stdev=196.28, samples=20 00:35:27.441 lat (msec) : 50=92.42%, 250=6.10%, 500=1.48% 00:35:27.441 cpu : usr=96.90%, sys=1.93%, ctx=63, majf=0, minf=36 00:35:27.441 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename1: (groupid=0, jobs=1): err= 0: pid=1597005: Wed Jul 24 02:13:40 2024 00:35:27.441 read: IOPS=331, BW=1326KiB/s (1358kB/s)(13.1MiB/10101msec) 00:35:27.441 slat (nsec): min=8074, max=96929, avg=24426.50, stdev=12250.07 00:35:27.441 clat (msec): min=19, max=421, avg=48.02, stdev=56.80 00:35:27.441 lat (msec): min=19, max=422, avg=48.05, stdev=56.80 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.441 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 213], 00:35:27.441 | 99.00th=[ 309], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 422], 00:35:27.441 | 99.99th=[ 422] 00:35:27.441 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1332.80, stdev=804.15, samples=20 00:35:27.441 iops : min= 32, max= 480, 
avg=333.20, stdev=201.04, samples=20 00:35:27.441 lat (msec) : 20=0.30%, 50=92.35%, 100=0.24%, 250=3.41%, 500=3.70% 00:35:27.441 cpu : usr=98.42%, sys=1.16%, ctx=22, majf=0, minf=37 00:35:27.441 IO depths : 1=5.9%, 2=12.0%, 4=24.8%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename1: (groupid=0, jobs=1): err= 0: pid=1597006: Wed Jul 24 02:13:40 2024 00:35:27.441 read: IOPS=340, BW=1362KiB/s (1394kB/s)(13.4MiB/10105msec) 00:35:27.441 slat (nsec): min=6949, max=87438, avg=24048.96, stdev=13287.80 00:35:27.441 clat (msec): min=32, max=262, avg=46.65, stdev=45.57 00:35:27.441 lat (msec): min=32, max=262, avg=46.67, stdev=45.57 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.441 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 194], 00:35:27.441 | 99.00th=[ 239], 99.50th=[ 257], 99.90th=[ 262], 99.95th=[ 262], 00:35:27.441 | 99.99th=[ 262] 00:35:27.441 bw ( KiB/s): min= 256, max= 2048, per=4.24%, avg=1369.60, stdev=759.08, samples=20 00:35:27.441 iops : min= 64, max= 512, avg=342.40, stdev=189.77, samples=20 00:35:27.441 lat (msec) : 50=91.57%, 100=0.52%, 250=7.21%, 500=0.70% 00:35:27.441 cpu : usr=97.89%, sys=1.54%, ctx=88, majf=0, minf=31 00:35:27.441 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename1: (groupid=0, jobs=1): err= 0: pid=1597007: Wed Jul 24 02:13:40 2024 00:35:27.441 read: IOPS=331, BW=1328KiB/s (1360kB/s)(13.1MiB/10115msec) 00:35:27.441 slat (usec): min=11, max=123, avg=77.88, stdev=13.02 00:35:27.441 clat (msec): min=31, max=401, avg=47.43, stdev=55.11 00:35:27.441 lat (msec): min=31, max=401, avg=47.51, stdev=55.10 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.441 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 209], 00:35:27.441 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 376], 99.95th=[ 401], 00:35:27.441 | 99.99th=[ 401] 00:35:27.441 bw ( KiB/s): min= 128, max= 1920, per=4.14%, avg=1337.60, stdev=802.56, samples=20 00:35:27.441 iops : min= 32, max= 480, avg=334.40, stdev=200.64, samples=20 00:35:27.441 lat (msec) : 50=92.97%, 250=3.75%, 500=3.28% 00:35:27.441 cpu : usr=95.20%, sys=2.88%, ctx=152, majf=0, minf=53 00:35:27.441 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename1: (groupid=0, jobs=1): err= 0: pid=1597008: Wed Jul 24 
02:13:40 2024 00:35:27.441 read: IOPS=331, BW=1324KiB/s (1356kB/s)(13.1MiB/10101msec) 00:35:27.441 slat (nsec): min=8278, max=94697, avg=29497.69, stdev=11717.79 00:35:27.441 clat (msec): min=28, max=371, avg=47.95, stdev=56.08 00:35:27.441 lat (msec): min=28, max=371, avg=47.98, stdev=56.08 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.441 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 211], 00:35:27.441 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 372], 99.95th=[ 372], 00:35:27.441 | 99.99th=[ 372] 00:35:27.441 bw ( KiB/s): min= 128, max= 2048, per=4.12%, avg=1331.20, stdev=813.31, samples=20 00:35:27.441 iops : min= 32, max= 512, avg=332.80, stdev=203.33, samples=20 00:35:27.441 lat (msec) : 50=93.30%, 250=3.41%, 500=3.29% 00:35:27.441 cpu : usr=97.94%, sys=1.61%, ctx=29, majf=0, minf=36 00:35:27.441 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename1: (groupid=0, jobs=1): err= 0: pid=1597009: Wed Jul 24 02:13:40 2024 00:35:27.441 read: IOPS=333, BW=1334KiB/s (1366kB/s)(13.2MiB/10098msec) 00:35:27.441 slat (usec): min=7, max=110, avg=21.68, stdev=15.22 00:35:27.441 clat (msec): min=19, max=387, avg=47.87, stdev=57.19 00:35:27.441 lat (msec): min=19, max=387, avg=47.90, stdev=57.20 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:27.441 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 211], 00:35:27.441 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 388], 99.95th=[ 388], 00:35:27.441 | 99.99th=[ 388] 00:35:27.441 bw ( KiB/s): min= 128, max= 1984, per=4.15%, avg=1340.80, stdev=818.26, samples=20 00:35:27.441 iops : min= 32, max= 496, avg=335.20, stdev=204.57, samples=20 00:35:27.441 lat (msec) : 20=0.24%, 50=92.40%, 100=0.71%, 250=3.15%, 500=3.50% 00:35:27.441 cpu : usr=97.58%, sys=1.61%, ctx=54, majf=0, minf=42 00:35:27.441 IO depths : 1=0.4%, 2=0.8%, 4=2.2%, 8=79.0%, 16=17.6%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=89.8%, 8=9.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.441 filename2: (groupid=0, jobs=1): err= 0: pid=1597011: Wed Jul 24 02:13:40 2024 00:35:27.441 read: IOPS=331, BW=1324KiB/s (1356kB/s)(13.1MiB/10099msec) 00:35:27.441 slat (usec): min=10, max=116, avg=36.83, stdev=18.57 00:35:27.441 clat (msec): min=24, max=399, avg=47.99, stdev=57.06 00:35:27.441 lat (msec): min=24, max=399, avg=48.02, stdev=57.05 00:35:27.441 clat percentiles (msec): 00:35:27.441 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.441 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.441 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 213], 00:35:27.441 | 99.00th=[ 309], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 401], 00:35:27.441 | 99.99th=[ 401] 
00:35:27.441 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=1331.20, stdev=800.08, samples=20 00:35:27.441 iops : min= 32, max= 480, avg=332.80, stdev=200.02, samples=20 00:35:27.441 lat (msec) : 50=93.30%, 250=2.93%, 500=3.77% 00:35:27.441 cpu : usr=97.49%, sys=1.82%, ctx=68, majf=0, minf=40 00:35:27.441 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.441 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.441 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597012: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=338, BW=1352KiB/s (1385kB/s)(13.4MiB/10115msec) 00:35:27.442 slat (nsec): min=5370, max=86576, avg=27932.93, stdev=13274.36 00:35:27.442 clat (msec): min=21, max=347, avg=46.99, stdev=48.45 00:35:27.442 lat (msec): min=21, max=347, avg=47.02, stdev=48.45 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 192], 00:35:27.442 | 99.00th=[ 253], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 347], 00:35:27.442 | 99.99th=[ 347] 00:35:27.442 bw ( KiB/s): min= 240, max= 1968, per=4.22%, avg=1361.60, stdev=776.06, samples=20 00:35:27.442 iops : min= 60, max= 492, avg=340.40, stdev=194.02, samples=20 00:35:27.442 lat (msec) : 50=91.87%, 100=0.29%, 250=6.26%, 500=1.58% 00:35:27.442 cpu : usr=98.15%, sys=1.43%, ctx=15, majf=0, minf=39 00:35:27.442 IO depths : 1=5.8%, 2=11.8%, 4=24.2%, 8=51.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 issued rwts: total=3420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597013: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=337, BW=1349KiB/s (1381kB/s)(13.3MiB/10105msec) 00:35:27.442 slat (usec): min=8, max=107, avg=36.78, stdev=27.38 00:35:27.442 clat (msec): min=20, max=431, avg=46.76, stdev=50.73 00:35:27.442 lat (msec): min=20, max=431, avg=46.80, stdev=50.73 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 176], 00:35:27.442 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 405], 99.95th=[ 430], 00:35:27.442 | 99.99th=[ 430] 00:35:27.442 bw ( KiB/s): min= 256, max= 1920, per=4.20%, avg=1356.80, stdev=776.26, samples=20 00:35:27.442 iops : min= 64, max= 480, avg=339.20, stdev=194.07, samples=20 00:35:27.442 lat (msec) : 50=92.02%, 100=0.88%, 250=4.75%, 500=2.35% 00:35:27.442 cpu : usr=96.32%, sys=2.29%, ctx=87, majf=0, minf=34 00:35:27.442 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 issued rwts: total=3408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597014: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=331, BW=1327KiB/s (1359kB/s)(13.1MiB/10079msec) 00:35:27.442 slat (usec): min=9, max=116, avg=49.29, stdev=25.88 00:35:27.442 clat (msec): min=26, max=314, avg=47.86, stdev=55.92 00:35:27.442 lat (msec): min=26, max=314, avg=47.90, stdev=55.92 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 211], 00:35:27.442 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 313], 99.95th=[ 313], 00:35:27.442 | 99.99th=[ 313] 00:35:27.442 bw ( KiB/s): min= 128, max= 2032, per=4.12%, avg=1331.20, stdev=811.22, samples=20 00:35:27.442 iops : min= 32, max= 508, avg=332.80, stdev=202.80, samples=20 00:35:27.442 lat (msec) : 50=93.30%, 250=3.29%, 500=3.41% 00:35:27.442 cpu : usr=96.25%, sys=2.13%, ctx=153, majf=0, minf=47 00:35:27.442 IO depths : 1=0.4%, 2=6.7%, 4=25.0%, 8=55.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597015: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=334, BW=1338KiB/s (1371kB/s)(13.2MiB/10089msec) 00:35:27.442 slat (usec): min=8, max=129, avg=51.30, stdev=27.06 00:35:27.442 clat (msec): min=32, max=432, avg=47.37, stdev=52.81 00:35:27.442 lat (msec): min=32, max=432, avg=47.42, stdev=52.81 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 203], 00:35:27.442 | 99.00th=[ 309], 99.50th=[ 326], 99.90th=[ 326], 99.95th=[ 435], 00:35:27.442 | 99.99th=[ 435] 00:35:27.442 bw ( KiB/s): min= 256, max= 2048, per=4.16%, avg=1344.00, stdev=793.95, samples=20 00:35:27.442 iops : min= 64, max= 512, avg=336.00, stdev=198.49, samples=20 00:35:27.442 lat (msec) : 50=91.94%, 100=0.95%, 250=4.80%, 500=2.31% 00:35:27.442 cpu : usr=97.90%, sys=1.59%, ctx=39, majf=0, minf=55 00:35:27.442 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 issued rwts: total=3376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597016: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=330, BW=1323KiB/s (1355kB/s)(13.1MiB/10108msec) 00:35:27.442 slat (usec): min=13, max=115, avg=32.04, stdev=10.67 00:35:27.442 clat (msec): min=32, max=402, avg=48.09, stdev=57.63 00:35:27.442 lat (msec): min=32, max=402, avg=48.12, stdev=57.63 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 213], 00:35:27.442 | 
99.00th=[ 309], 99.50th=[ 351], 99.90th=[ 376], 99.95th=[ 401], 00:35:27.442 | 99.99th=[ 401] 00:35:27.442 bw ( KiB/s): min= 127, max= 1920, per=4.12%, avg=1331.15, stdev=812.17, samples=20 00:35:27.442 iops : min= 31, max= 480, avg=332.75, stdev=203.10, samples=20 00:35:27.442 lat (msec) : 50=93.36%, 250=2.87%, 500=3.77% 00:35:27.442 cpu : usr=97.73%, sys=1.70%, ctx=22, majf=0, minf=30 00:35:27.442 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 issued rwts: total=3344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597017: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=333, BW=1332KiB/s (1364kB/s)(13.1MiB/10090msec) 00:35:27.442 slat (usec): min=11, max=115, avg=76.09, stdev=13.64 00:35:27.442 clat (msec): min=31, max=362, avg=47.38, stdev=53.98 00:35:27.442 lat (msec): min=31, max=362, avg=47.45, stdev=53.98 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 34], 95.00th=[ 205], 00:35:27.442 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 347], 99.95th=[ 363], 00:35:27.442 | 99.99th=[ 363] 00:35:27.442 bw ( KiB/s): min= 256, max= 1920, per=4.14%, avg=1337.60, stdev=801.49, samples=20 00:35:27.442 iops : min= 64, max= 480, avg=334.40, stdev=200.37, samples=20 00:35:27.442 lat (msec) : 50=92.86%, 250=4.29%, 500=2.86% 00:35:27.442 cpu : usr=94.06%, sys=3.12%, ctx=189, majf=0, minf=42 00:35:27.442 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 issued rwts: total=3360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 filename2: (groupid=0, jobs=1): err= 0: pid=1597018: Wed Jul 24 02:13:40 2024 00:35:27.442 read: IOPS=335, BW=1344KiB/s (1376kB/s)(13.3MiB/10115msec) 00:35:27.442 slat (nsec): min=8025, max=92005, avg=23345.72, stdev=12441.21 00:35:27.442 clat (msec): min=19, max=403, avg=47.32, stdev=49.64 00:35:27.442 lat (msec): min=19, max=403, avg=47.34, stdev=49.64 00:35:27.442 clat percentiles (msec): 00:35:27.442 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:27.442 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:27.442 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 203], 00:35:27.442 | 99.00th=[ 259], 99.50th=[ 309], 99.90th=[ 326], 99.95th=[ 405], 00:35:27.442 | 99.99th=[ 405] 00:35:27.442 bw ( KiB/s): min= 256, max= 2048, per=4.19%, avg=1352.80, stdev=791.35, samples=20 00:35:27.442 iops : min= 64, max= 512, avg=338.20, stdev=197.84, samples=20 00:35:27.442 lat (msec) : 20=0.26%, 50=91.55%, 100=0.24%, 250=6.53%, 500=1.41% 00:35:27.442 cpu : usr=97.77%, sys=1.59%, ctx=79, majf=0, minf=30 00:35:27.442 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:27.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.442 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:35:27.442 issued rwts: total=3398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.442 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:27.442 00:35:27.442 Run status group 0 (all jobs): 00:35:27.442 READ: bw=31.5MiB/s (33.1MB/s), 1323KiB/s-1644KiB/s (1355kB/s-1684kB/s), io=319MiB (335MB), run=10017-10131msec 00:35:27.442 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:27.442 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:27.442 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 bdev_null0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 [2024-07-24 02:13:40.888273] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:27.443 02:13:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 bdev_null1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.443 { 00:35:27.443 "params": { 00:35:27.443 "name": "Nvme$subsystem", 00:35:27.443 "trtype": "$TEST_TRANSPORT", 00:35:27.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.443 "adrfam": "ipv4", 00:35:27.443 "trsvcid": "$NVMF_PORT", 00:35:27.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.443 "hdgst": ${hdgst:-false}, 00:35:27.443 "ddgst": ${ddgst:-false} 00:35:27.443 }, 00:35:27.443 "method": "bdev_nvme_attach_controller" 00:35:27.443 } 00:35:27.443 EOF 00:35:27.443 )") 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
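The fio_bdev/fio_plugin trace above shows how the test drives fio through SPDK's bdev external ioengine: the plugin is LD_PRELOADed into fio, the bdev_nvme attach JSON is passed on /dev/fd/62 and the generated job file on /dev/fd/61. A hedged standalone equivalent, with ordinary files standing in for the file descriptors (the SPDK path and both input file names are placeholders), is:

# Sketch only: same invocation shape as the trace, with placeholder inputs.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobfile.fio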
00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:27.443 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:27.444 { 00:35:27.444 "params": { 00:35:27.444 "name": "Nvme$subsystem", 00:35:27.444 "trtype": "$TEST_TRANSPORT", 00:35:27.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:27.444 "adrfam": "ipv4", 00:35:27.444 "trsvcid": "$NVMF_PORT", 00:35:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:27.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:27.444 "hdgst": ${hdgst:-false}, 00:35:27.444 "ddgst": ${ddgst:-false} 00:35:27.444 }, 00:35:27.444 "method": "bdev_nvme_attach_controller" 00:35:27.444 } 00:35:27.444 EOF 00:35:27.444 )") 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
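The job file itself is consumed on /dev/fd/61 and never echoed into the log, unlike the attach-controller JSON printed just below. For this run (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, two null-backed namespaces) a roughly equivalent hand-written job file would look like the sketch below; the Nvme0n1/Nvme1n1 filenames assume SPDK's default <controller-name>n<nsid> bdev naming:

# Sketch only: approximates the job file gen_fio_conf feeds fio for this run.
cat > jobfile.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
numjobs=2
iodepth=8
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF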
00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:27.444 "params": { 00:35:27.444 "name": "Nvme0", 00:35:27.444 "trtype": "tcp", 00:35:27.444 "traddr": "10.0.0.2", 00:35:27.444 "adrfam": "ipv4", 00:35:27.444 "trsvcid": "4420", 00:35:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:27.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:27.444 "hdgst": false, 00:35:27.444 "ddgst": false 00:35:27.444 }, 00:35:27.444 "method": "bdev_nvme_attach_controller" 00:35:27.444 },{ 00:35:27.444 "params": { 00:35:27.444 "name": "Nvme1", 00:35:27.444 "trtype": "tcp", 00:35:27.444 "traddr": "10.0.0.2", 00:35:27.444 "adrfam": "ipv4", 00:35:27.444 "trsvcid": "4420", 00:35:27.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:27.444 "hdgst": false, 00:35:27.444 "ddgst": false 00:35:27.444 }, 00:35:27.444 "method": "bdev_nvme_attach_controller" 00:35:27.444 }' 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:27.444 02:13:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:27.444 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:27.444 ... 00:35:27.444 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:27.444 ... 
00:35:27.444 fio-3.35 00:35:27.444 Starting 4 threads 00:35:27.444 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.709 00:35:32.709 filename0: (groupid=0, jobs=1): err= 0: pid=1598499: Wed Jul 24 02:13:46 2024 00:35:32.709 read: IOPS=1741, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5003msec) 00:35:32.709 slat (nsec): min=4589, max=68549, avg=18073.81, stdev=8230.29 00:35:32.709 clat (usec): min=1029, max=8885, avg=4541.79, stdev=200.80 00:35:32.709 lat (usec): min=1046, max=8909, avg=4559.86, stdev=200.98 00:35:32.709 clat percentiles (usec): 00:35:32.710 | 1.00th=[ 4228], 5.00th=[ 4359], 10.00th=[ 4359], 20.00th=[ 4424], 00:35:32.710 | 30.00th=[ 4490], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4555], 00:35:32.710 | 70.00th=[ 4621], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:35:32.710 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 8029], 99.95th=[ 8848], 00:35:32.710 | 99.99th=[ 8848] 00:35:32.710 bw ( KiB/s): min=13824, max=14080, per=24.98%, avg=13926.40, stdev=100.97, samples=10 00:35:32.710 iops : min= 1728, max= 1760, avg=1740.80, stdev=12.62, samples=10 00:35:32.710 lat (msec) : 2=0.03%, 4=0.08%, 10=99.89% 00:35:32.710 cpu : usr=95.30%, sys=4.26%, ctx=11, majf=0, minf=105 00:35:32.710 IO depths : 1=0.2%, 2=2.2%, 4=72.6%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 issued rwts: total=8712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:32.710 filename0: (groupid=0, jobs=1): err= 0: pid=1598500: Wed Jul 24 02:13:46 2024 00:35:32.710 read: IOPS=1742, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5003msec) 00:35:32.710 slat (nsec): min=4059, max=64405, avg=20477.04, stdev=7615.93 00:35:32.710 clat (usec): min=1016, max=8017, avg=4506.64, stdev=169.35 00:35:32.710 lat (usec): min=1030, max=8037, avg=4527.12, stdev=169.43 00:35:32.710 clat percentiles (usec): 00:35:32.710 | 1.00th=[ 4178], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4424], 00:35:32.710 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:35:32.710 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:35:32.710 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 5080], 99.95th=[ 6915], 00:35:32.710 | 99.99th=[ 8029] 00:35:32.710 bw ( KiB/s): min=13696, max=14080, per=25.01%, avg=13941.90, stdev=110.96, samples=10 00:35:32.710 iops : min= 1712, max= 1760, avg=1742.70, stdev=13.86, samples=10 00:35:32.710 lat (msec) : 2=0.05%, 4=0.26%, 10=99.69% 00:35:32.710 cpu : usr=95.14%, sys=4.12%, ctx=12, majf=0, minf=76 00:35:32.710 IO depths : 1=1.7%, 2=23.8%, 4=51.1%, 8=23.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 issued rwts: total=8720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:32.710 filename1: (groupid=0, jobs=1): err= 0: pid=1598501: Wed Jul 24 02:13:46 2024 00:35:32.710 read: IOPS=1742, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5004msec) 00:35:32.710 slat (nsec): min=4538, max=69396, avg=22519.34, stdev=8729.04 00:35:32.710 clat (usec): min=1596, max=7186, avg=4499.11, stdev=151.47 00:35:32.710 lat (usec): min=1615, max=7214, avg=4521.63, stdev=151.64 00:35:32.710 clat percentiles (usec): 00:35:32.710 | 1.00th=[ 4178], 5.00th=[ 
4293], 10.00th=[ 4359], 20.00th=[ 4424], 00:35:32.710 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:35:32.710 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4686], 00:35:32.710 | 99.00th=[ 4817], 99.50th=[ 4883], 99.90th=[ 6128], 99.95th=[ 6194], 00:35:32.710 | 99.99th=[ 7177] 00:35:32.710 bw ( KiB/s): min=13696, max=14080, per=25.01%, avg=13939.20, stdev=112.08, samples=10 00:35:32.710 iops : min= 1712, max= 1760, avg=1742.40, stdev=14.01, samples=10 00:35:32.710 lat (msec) : 2=0.01%, 4=0.25%, 10=99.74% 00:35:32.710 cpu : usr=91.78%, sys=5.98%, ctx=97, majf=0, minf=95 00:35:32.710 IO depths : 1=1.5%, 2=24.8%, 4=50.2%, 8=23.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 issued rwts: total=8720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:32.710 filename1: (groupid=0, jobs=1): err= 0: pid=1598502: Wed Jul 24 02:13:46 2024 00:35:32.710 read: IOPS=1741, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5003msec) 00:35:32.710 slat (nsec): min=4157, max=69699, avg=22105.79, stdev=8507.92 00:35:32.710 clat (usec): min=1201, max=8050, avg=4505.28, stdev=236.07 00:35:32.710 lat (usec): min=1216, max=8060, avg=4527.38, stdev=235.96 00:35:32.710 clat percentiles (usec): 00:35:32.710 | 1.00th=[ 4228], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4424], 00:35:32.710 | 30.00th=[ 4424], 40.00th=[ 4490], 50.00th=[ 4490], 60.00th=[ 4555], 00:35:32.710 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 4686], 95.00th=[ 4752], 00:35:32.710 | 99.00th=[ 4883], 99.50th=[ 5342], 99.90th=[ 7701], 99.95th=[ 7767], 00:35:32.710 | 99.99th=[ 8029] 00:35:32.710 bw ( KiB/s): min=13696, max=14080, per=24.98%, avg=13926.40, stdev=116.16, samples=10 00:35:32.710 iops : min= 1712, max= 1760, avg=1740.80, stdev=14.52, samples=10 00:35:32.710 lat (msec) : 2=0.13%, 4=0.25%, 10=99.62% 00:35:32.710 cpu : usr=94.58%, sys=4.54%, ctx=13, majf=0, minf=99 00:35:32.710 IO depths : 1=0.6%, 2=24.3%, 4=50.5%, 8=24.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.710 issued rwts: total=8712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:32.710 00:35:32.710 Run status group 0 (all jobs): 00:35:32.710 READ: bw=54.4MiB/s (57.1MB/s), 13.6MiB/s-13.6MiB/s (14.3MB/s-14.3MB/s), io=272MiB (286MB), run=5003-5004msec 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.710 00:35:32.710 real 0m24.179s 00:35:32.710 user 4m32.325s 00:35:32.710 sys 0m8.003s 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:32.710 02:13:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.710 ************************************ 00:35:32.710 END TEST fio_dif_rand_params 00:35:32.710 ************************************ 00:35:32.710 02:13:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:32.710 02:13:47 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:32.710 02:13:47 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:32.710 02:13:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.710 ************************************ 00:35:32.710 START TEST fio_dif_digest 00:35:32.710 ************************************ 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:32.710 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.711 bdev_null0 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:32.711 [2024-07-24 02:13:47.268612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.711 { 00:35:32.711 "params": { 00:35:32.711 "name": "Nvme$subsystem", 00:35:32.711 "trtype": "$TEST_TRANSPORT", 00:35:32.711 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.711 "adrfam": "ipv4", 00:35:32.711 "trsvcid": "$NVMF_PORT", 00:35:32.711 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.711 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.711 "hdgst": ${hdgst:-false}, 00:35:32.711 "ddgst": ${ddgst:-false} 00:35:32.711 }, 00:35:32.711 "method": 
"bdev_nvme_attach_controller" 00:35:32.711 } 00:35:32.711 EOF 00:35:32.711 )") 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:32.711 "params": { 00:35:32.711 "name": "Nvme0", 00:35:32.711 "trtype": "tcp", 00:35:32.711 "traddr": "10.0.0.2", 00:35:32.711 "adrfam": "ipv4", 00:35:32.711 "trsvcid": "4420", 00:35:32.711 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.711 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.711 "hdgst": true, 00:35:32.711 "ddgst": true 00:35:32.711 }, 00:35:32.711 "method": "bdev_nvme_attach_controller" 00:35:32.711 }' 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:32.711 02:13:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.711 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:32.711 ... 
00:35:32.711 fio-3.35 00:35:32.711 Starting 3 threads 00:35:32.711 EAL: No free 2048 kB hugepages reported on node 1 00:35:44.913 00:35:44.913 filename0: (groupid=0, jobs=1): err= 0: pid=1599254: Wed Jul 24 02:13:58 2024 00:35:44.913 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(258MiB/10046msec) 00:35:44.913 slat (nsec): min=5056, max=78691, avg=17908.48, stdev=5036.61 00:35:44.913 clat (usec): min=8746, max=46110, avg=14530.73, stdev=1312.79 00:35:44.913 lat (usec): min=8755, max=46128, avg=14548.64, stdev=1312.98 00:35:44.913 clat percentiles (usec): 00:35:44.913 | 1.00th=[10814], 5.00th=[12780], 10.00th=[13173], 20.00th=[13698], 00:35:44.913 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:35:44.913 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16188], 00:35:44.913 | 99.00th=[16909], 99.50th=[17171], 99.90th=[19792], 99.95th=[19792], 00:35:44.913 | 99.99th=[45876] 00:35:44.913 bw ( KiB/s): min=25856, max=27648, per=32.83%, avg=26419.20, stdev=624.87, samples=20 00:35:44.913 iops : min= 202, max= 216, avg=206.40, stdev= 4.88, samples=20 00:35:44.913 lat (msec) : 10=0.29%, 20=99.66%, 50=0.05% 00:35:44.913 cpu : usr=82.91%, sys=11.97%, ctx=587, majf=0, minf=163 00:35:44.914 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.914 issued rwts: total=2065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.914 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:44.914 filename0: (groupid=0, jobs=1): err= 0: pid=1599255: Wed Jul 24 02:13:58 2024 00:35:44.914 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(273MiB/10043msec) 00:35:44.914 slat (nsec): min=4902, max=29751, avg=14499.83, stdev=1502.83 00:35:44.914 clat (usec): min=7515, max=46660, avg=13751.64, stdev=1342.93 00:35:44.914 lat (usec): min=7529, max=46674, avg=13766.14, stdev=1342.85 00:35:44.914 clat percentiles (usec): 00:35:44.914 | 1.00th=[ 9503], 5.00th=[12125], 10.00th=[12518], 20.00th=[12911], 00:35:44.914 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[13960], 00:35:44.914 | 70.00th=[14222], 80.00th=[14484], 90.00th=[15008], 95.00th=[15533], 00:35:44.914 | 99.00th=[16450], 99.50th=[16712], 99.90th=[21365], 99.95th=[21365], 00:35:44.914 | 99.99th=[46400] 00:35:44.914 bw ( KiB/s): min=25600, max=29440, per=34.69%, avg=27916.80, stdev=900.22, samples=20 00:35:44.914 iops : min= 200, max= 230, avg=218.10, stdev= 7.03, samples=20 00:35:44.914 lat (msec) : 10=1.05%, 20=98.76%, 50=0.18% 00:35:44.914 cpu : usr=92.66%, sys=6.84%, ctx=23, majf=0, minf=46 00:35:44.914 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.914 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.914 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:44.914 filename0: (groupid=0, jobs=1): err= 0: pid=1599256: Wed Jul 24 02:13:58 2024 00:35:44.914 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(259MiB/10044msec) 00:35:44.914 slat (nsec): min=5036, max=32529, avg=14713.19, stdev=1969.30 00:35:44.914 clat (usec): min=11520, max=55389, avg=14531.27, stdev=2659.22 00:35:44.914 lat (usec): min=11537, max=55404, avg=14545.99, stdev=2659.18 00:35:44.914 clat percentiles (usec): 00:35:44.914 | 
1.00th=[12125], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:35:44.914 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:35:44.914 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15664], 95.00th=[16057], 00:35:44.914 | 99.00th=[16909], 99.50th=[24511], 99.90th=[54264], 99.95th=[54789], 00:35:44.914 | 99.99th=[55313] 00:35:44.914 bw ( KiB/s): min=24576, max=27392, per=32.87%, avg=26447.30, stdev=805.63, samples=20 00:35:44.914 iops : min= 192, max= 214, avg=206.60, stdev= 6.33, samples=20 00:35:44.914 lat (msec) : 20=99.47%, 50=0.19%, 100=0.34% 00:35:44.914 cpu : usr=91.91%, sys=7.05%, ctx=513, majf=0, minf=102 00:35:44.914 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:44.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:44.914 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:44.914 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:44.914 00:35:44.914 Run status group 0 (all jobs): 00:35:44.914 READ: bw=78.6MiB/s (82.4MB/s), 25.7MiB/s-27.2MiB/s (26.9MB/s-28.5MB/s), io=789MiB (828MB), run=10043-10046msec 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:44.914 00:35:44.914 real 0m11.026s 00:35:44.914 user 0m27.903s 00:35:44.914 sys 0m2.856s 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:44.914 02:13:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:44.914 ************************************ 00:35:44.914 END TEST fio_dif_digest 00:35:44.914 ************************************ 00:35:44.914 02:13:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:44.914 02:13:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:44.914 rmmod nvme_tcp 00:35:44.914 rmmod nvme_fabrics 00:35:44.914 
rmmod nvme_keyring 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1593212 ']' 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1593212 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1593212 ']' 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1593212 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1593212 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1593212' 00:35:44.914 killing process with pid 1593212 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1593212 00:35:44.914 02:13:58 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1593212 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:44.914 02:13:58 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:44.914 Waiting for block devices as requested 00:35:44.914 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:44.914 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:45.172 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:45.172 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:45.172 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:45.172 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:45.432 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:45.432 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:45.432 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:45.432 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:45.692 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:45.692 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:45.692 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:45.952 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:45.952 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:45.952 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:45.952 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:46.231 02:14:00 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:46.231 02:14:00 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:46.231 02:14:00 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:46.231 02:14:00 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:46.231 02:14:00 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:46.231 02:14:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:46.231 02:14:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.148 02:14:02 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:48.148 00:35:48.148 real 1m6.366s 00:35:48.148 user 6m26.633s 00:35:48.148 sys 0m20.384s 00:35:48.148 02:14:02 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:48.148 02:14:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:35:48.148 ************************************ 00:35:48.148 END TEST nvmf_dif 00:35:48.148 ************************************ 00:35:48.148 02:14:02 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:48.148 02:14:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:48.148 02:14:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:48.148 02:14:02 -- common/autotest_common.sh@10 -- # set +x 00:35:48.148 ************************************ 00:35:48.148 START TEST nvmf_abort_qd_sizes 00:35:48.148 ************************************ 00:35:48.148 02:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:48.406 * Looking for test storage... 00:35:48.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:48.406 02:14:03 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:48.406 02:14:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:50.307 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:50.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:50.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:50.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:50.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:50.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:50.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:35:50.308 00:35:50.308 --- 10.0.0.2 ping statistics --- 00:35:50.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.308 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:50.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:50.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:35:50.308 00:35:50.308 --- 10.0.0.1 ping statistics --- 00:35:50.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:50.308 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:50.308 02:14:05 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:51.684 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:51.684 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:51.684 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:51.684 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:51.684 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:51.684 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:51.684 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:51.685 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:51.685 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:51.685 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:52.621 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:52.621 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:52.621 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:52.621 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:52.621 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:52.621 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:52.621 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1604047 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1604047 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1604047 ']' 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:52.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:52.879 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:52.879 [2024-07-24 02:14:07.585025] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:35:52.879 [2024-07-24 02:14:07.585097] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:52.879 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.879 [2024-07-24 02:14:07.646940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:52.879 [2024-07-24 02:14:07.738762] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:52.879 [2024-07-24 02:14:07.738822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:52.879 [2024-07-24 02:14:07.738848] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:52.879 [2024-07-24 02:14:07.738863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:52.879 [2024-07-24 02:14:07.738875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:52.879 [2024-07-24 02:14:07.738958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.879 [2024-07-24 02:14:07.739014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:52.879 [2024-07-24 02:14:07.739127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:52.879 [2024-07-24 02:14:07.739129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:35:53.137 02:14:07 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:53.137 02:14:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:53.137 ************************************ 00:35:53.137 START TEST spdk_target_abort 00:35:53.137 ************************************ 00:35:53.137 02:14:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:53.137 02:14:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:53.137 02:14:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:35:53.137 02:14:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:53.137 02:14:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.422 spdk_targetn1 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.422 [2024-07-24 02:14:10.748020] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.422 [2024-07-24 02:14:10.780272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:56.422 02:14:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:56.423 EAL: No free 2048 kB hugepages 
reported on node 1 00:35:59.710 Initializing NVMe Controllers 00:35:59.710 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:59.710 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:59.710 Initialization complete. Launching workers. 00:35:59.710 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11970, failed: 0 00:35:59.710 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 10757 00:35:59.719 success 780, unsuccess 433, failed 0 00:35:59.719 02:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:59.719 02:14:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:59.719 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.004 Initializing NVMe Controllers 00:36:03.004 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:03.004 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:03.004 Initialization complete. Launching workers. 00:36:03.004 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8643, failed: 0 00:36:03.004 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1213, failed to submit 7430 00:36:03.004 success 329, unsuccess 884, failed 0 00:36:03.004 02:14:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.004 02:14:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.004 EAL: No free 2048 kB hugepages reported on node 1 00:36:06.285 Initializing NVMe Controllers 00:36:06.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:06.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:06.285 Initialization complete. Launching workers. 
00:36:06.285 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31484, failed: 0 00:36:06.285 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2617, failed to submit 28867 00:36:06.285 success 536, unsuccess 2081, failed 0 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:06.285 02:14:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1604047 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1604047 ']' 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1604047 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1604047 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1604047' 00:36:07.220 killing process with pid 1604047 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1604047 00:36:07.220 02:14:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1604047 00:36:07.478 00:36:07.478 real 0m14.299s 00:36:07.478 user 0m54.130s 00:36:07.478 sys 0m2.589s 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.478 ************************************ 00:36:07.478 END TEST spdk_target_abort 00:36:07.478 ************************************ 00:36:07.478 02:14:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:07.478 02:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:07.478 02:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:07.478 02:14:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:07.478 ************************************ 00:36:07.478 START TEST kernel_target_abort 00:36:07.478 
************************************ 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:07.478 02:14:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:08.418 Waiting for block devices as requested 00:36:08.418 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:08.710 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:08.710 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:08.974 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:08.975 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:08.975 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:08.975 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:08.975 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:09.233 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:09.233 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:09.233 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:09.233 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:09.491 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:09.491 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:09.491 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:09.749 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:09.749 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:09.749 No valid GPT data, bailing 00:36:09.749 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:10.007 02:14:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:10.007 00:36:10.007 Discovery Log Number of Records 2, Generation counter 2 00:36:10.007 =====Discovery Log Entry 0====== 00:36:10.007 trtype: tcp 00:36:10.007 adrfam: ipv4 00:36:10.007 subtype: current discovery subsystem 00:36:10.007 treq: not specified, sq flow control disable supported 00:36:10.007 portid: 1 00:36:10.007 trsvcid: 4420 00:36:10.007 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:10.007 traddr: 10.0.0.1 00:36:10.007 eflags: none 00:36:10.007 sectype: none 00:36:10.007 =====Discovery Log Entry 1====== 00:36:10.007 trtype: tcp 00:36:10.007 adrfam: ipv4 00:36:10.007 subtype: nvme subsystem 00:36:10.007 treq: not specified, sq flow control disable supported 00:36:10.007 portid: 1 00:36:10.007 trsvcid: 4420 00:36:10.007 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:10.007 traddr: 10.0.0.1 00:36:10.007 eflags: none 00:36:10.007 sectype: none 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:10.007 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:10.008 02:14:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:10.008 02:14:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:10.008 EAL: No free 2048 kB hugepages reported on node 1 00:36:13.287 Initializing NVMe Controllers 00:36:13.287 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:13.287 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:13.287 Initialization complete. Launching workers. 00:36:13.287 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38294, failed: 0 00:36:13.287 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38294, failed to submit 0 00:36:13.287 success 0, unsuccess 38294, failed 0 00:36:13.287 02:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:13.287 02:14:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:13.287 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.570 Initializing NVMe Controllers 00:36:16.570 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:16.570 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:16.570 Initialization complete. Launching workers. 
00:36:16.570 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78411, failed: 0 00:36:16.570 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19774, failed to submit 58637 00:36:16.570 success 0, unsuccess 19774, failed 0 00:36:16.570 02:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:16.570 02:14:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:16.570 EAL: No free 2048 kB hugepages reported on node 1 00:36:19.854 Initializing NVMe Controllers 00:36:19.854 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:19.854 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:19.854 Initialization complete. Launching workers. 00:36:19.854 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73010, failed: 0 00:36:19.854 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18238, failed to submit 54772 00:36:19.854 success 0, unsuccess 18238, failed 0 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:19.854 02:14:34 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:20.421 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:20.421 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:20.421 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:20.421 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:20.421 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:20.421 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:20.421 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:20.421 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:20.680 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:20.680 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:21.616 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:21.616 00:36:21.616 real 0m14.102s 00:36:21.616 user 0m5.731s 00:36:21.616 sys 0m3.250s 00:36:21.616 02:14:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:21.616 02:14:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.616 ************************************ 00:36:21.616 END TEST kernel_target_abort 00:36:21.616 ************************************ 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:21.616 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:21.617 rmmod nvme_tcp 00:36:21.617 rmmod nvme_fabrics 00:36:21.617 rmmod nvme_keyring 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1604047 ']' 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1604047 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1604047 ']' 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1604047 00:36:21.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1604047) - No such process 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1604047 is not found' 00:36:21.617 Process with pid 1604047 is not found 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:21.617 02:14:36 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:22.994 Waiting for block devices as requested 00:36:22.994 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:22.994 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:22.994 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:23.253 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:23.253 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:23.253 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:23.253 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:23.512 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:23.512 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:23.512 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:23.512 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:23.771 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:23.771 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:23.771 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:23.771 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:24.031 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:24.031 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:24.031 02:14:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.564 02:14:40 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:26.564 00:36:26.564 real 0m37.884s 00:36:26.564 user 1m2.073s 00:36:26.564 sys 0m9.178s 00:36:26.564 02:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:26.564 02:14:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:26.564 ************************************ 00:36:26.564 END TEST nvmf_abort_qd_sizes 00:36:26.564 ************************************ 00:36:26.564 02:14:40 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:26.564 02:14:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:26.564 02:14:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:26.564 02:14:40 -- common/autotest_common.sh@10 -- # set +x 00:36:26.564 ************************************ 00:36:26.564 START TEST keyring_file 00:36:26.564 ************************************ 00:36:26.564 02:14:40 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:26.564 * Looking for test storage... 
00:36:26.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:26.564 02:14:40 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:26.564 02:14:40 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:26.564 02:14:40 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:26.564 02:14:40 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:26.565 02:14:40 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.565 02:14:40 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.565 02:14:40 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.565 02:14:40 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.565 02:14:40 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.565 02:14:40 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.565 02:14:40 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:26.565 02:14:40 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:26.565 02:14:40 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:26.565 02:14:40 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:26.565 02:14:40 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:26.565 02:14:40 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:26.565 02:14:40 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:26.565 02:14:40 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:26.565 02:14:40 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:26.565 02:14:40 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:26.565 02:14:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DAEaa8hx0e 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:26.565 02:14:41 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DAEaa8hx0e 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DAEaa8hx0e 00:36:26.565 02:14:41 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DAEaa8hx0e 00:36:26.565 02:14:41 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FTvK2lIVu3 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:26.565 02:14:41 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FTvK2lIVu3 00:36:26.565 02:14:41 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FTvK2lIVu3 00:36:26.565 02:14:41 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.FTvK2lIVu3 00:36:26.565 02:14:41 keyring_file -- keyring/file.sh@30 -- # tgtpid=1609796 00:36:26.565 02:14:41 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:26.565 02:14:41 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1609796 00:36:26.565 02:14:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1609796 ']' 00:36:26.565 02:14:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:26.565 02:14:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.565 02:14:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:26.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:26.565 02:14:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.565 02:14:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.565 [2024-07-24 02:14:41.134445] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:36:26.565 [2024-07-24 02:14:41.134536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609796 ] 00:36:26.565 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.565 [2024-07-24 02:14:41.190593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.565 [2024-07-24 02:14:41.271685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:26.824 02:14:41 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.824 [2024-07-24 02:14:41.512840] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:26.824 null0 00:36:26.824 [2024-07-24 02:14:41.544932] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:26.824 [2024-07-24 02:14:41.545443] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:26.824 [2024-07-24 02:14:41.552923] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:26.824 02:14:41 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.824 [2024-07-24 02:14:41.560921] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:26.824 request: 00:36:26.824 { 00:36:26.824 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:26.824 "secure_channel": false, 00:36:26.824 "listen_address": { 00:36:26.824 "trtype": "tcp", 00:36:26.824 "traddr": "127.0.0.1", 00:36:26.824 "trsvcid": "4420" 00:36:26.824 }, 00:36:26.824 "method": "nvmf_subsystem_add_listener", 00:36:26.824 "req_id": 1 00:36:26.824 } 00:36:26.824 Got JSON-RPC error response 00:36:26.824 response: 00:36:26.824 { 00:36:26.824 "code": -32602, 00:36:26.824 "message": "Invalid parameters" 00:36:26.824 } 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:26.824 02:14:41 keyring_file -- keyring/file.sh@46 -- # bperfpid=1609806 00:36:26.824 02:14:41 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:26.824 02:14:41 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1609806 /var/tmp/bperf.sock 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1609806 ']' 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:26.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:26.824 02:14:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:26.824 [2024-07-24 02:14:41.609587] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 00:36:26.824 [2024-07-24 02:14:41.609691] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1609806 ] 00:36:26.824 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.824 [2024-07-24 02:14:41.669765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.082 [2024-07-24 02:14:41.760500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:27.082 02:14:41 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:27.082 02:14:41 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:27.082 02:14:41 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:27.082 02:14:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:27.341 02:14:42 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FTvK2lIVu3 00:36:27.341 02:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FTvK2lIVu3 00:36:27.600 02:14:42 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:27.600 02:14:42 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:27.600 02:14:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.600 02:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.600 02:14:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.858 02:14:42 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DAEaa8hx0e == \/\t\m\p\/\t\m\p\.\D\A\E\a\a\8\h\x\0\e ]] 00:36:27.858 02:14:42 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:36:27.858 02:14:42 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:27.858 02:14:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.858 02:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.858 02:14:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:28.116 02:14:42 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.FTvK2lIVu3 == \/\t\m\p\/\t\m\p\.\F\T\v\K\2\l\I\V\u\3 ]] 00:36:28.116 02:14:42 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:28.116 02:14:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.116 02:14:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.116 02:14:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.116 02:14:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.116 02:14:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.374 02:14:43 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:28.374 02:14:43 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:28.374 02:14:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:28.374 02:14:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.374 02:14:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.374 02:14:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.374 02:14:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:28.635 02:14:43 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:28.635 02:14:43 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.635 02:14:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:28.893 [2024-07-24 02:14:43.652450] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:28.893 nvme0n1 00:36:28.893 02:14:43 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:28.893 02:14:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.893 02:14:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.893 02:14:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.893 02:14:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.893 02:14:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.151 02:14:43 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:29.151 02:14:43 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:29.151 02:14:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:29.151 02:14:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:29.151 02:14:43 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:29.151 02:14:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.151 02:14:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:29.409 02:14:44 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:29.409 02:14:44 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:29.673 Running I/O for 1 seconds... 00:36:30.647 00:36:30.647 Latency(us) 00:36:30.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.647 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:30.647 nvme0n1 : 1.01 6378.04 24.91 0.00 0.00 19963.97 4029.25 26796.94 00:36:30.647 =================================================================================================================== 00:36:30.647 Total : 6378.04 24.91 0.00 0.00 19963.97 4029.25 26796.94 00:36:30.647 0 00:36:30.647 02:14:45 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:30.647 02:14:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:30.905 02:14:45 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:30.905 02:14:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:30.905 02:14:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:30.905 02:14:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:30.905 02:14:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:30.905 02:14:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.163 02:14:45 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:31.163 02:14:45 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:31.163 02:14:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:31.163 02:14:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.163 02:14:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.163 02:14:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:31.163 02:14:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.424 02:14:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:31.424 02:14:46 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.424 02:14:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:31.424 02:14:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.424 02:14:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:31.424 02:14:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:31.424 02:14:46 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:31.424 02:14:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:31.424 02:14:46 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.424 02:14:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:31.682 [2024-07-24 02:14:46.364444] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:31.682 [2024-07-24 02:14:46.364970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf6710 (107): Transport endpoint is not connected 00:36:31.682 [2024-07-24 02:14:46.365957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf6710 (9): Bad file descriptor 00:36:31.682 [2024-07-24 02:14:46.366955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:31.682 [2024-07-24 02:14:46.366980] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:31.682 [2024-07-24 02:14:46.366995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:31.682 request: 00:36:31.682 { 00:36:31.682 "name": "nvme0", 00:36:31.682 "trtype": "tcp", 00:36:31.682 "traddr": "127.0.0.1", 00:36:31.682 "adrfam": "ipv4", 00:36:31.682 "trsvcid": "4420", 00:36:31.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:31.682 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:31.682 "prchk_reftag": false, 00:36:31.682 "prchk_guard": false, 00:36:31.682 "hdgst": false, 00:36:31.682 "ddgst": false, 00:36:31.682 "psk": "key1", 00:36:31.682 "method": "bdev_nvme_attach_controller", 00:36:31.682 "req_id": 1 00:36:31.682 } 00:36:31.682 Got JSON-RPC error response 00:36:31.682 response: 00:36:31.682 { 00:36:31.682 "code": -5, 00:36:31.682 "message": "Input/output error" 00:36:31.682 } 00:36:31.682 02:14:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:31.682 02:14:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:31.682 02:14:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:31.682 02:14:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:31.682 02:14:46 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:31.682 02:14:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:31.682 02:14:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.682 02:14:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.682 02:14:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.682 02:14:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.940 02:14:46 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:31.940 02:14:46 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:31.940 02:14:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:31.940 02:14:46 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.940 02:14:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.940 02:14:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.940 02:14:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:32.198 02:14:46 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:32.198 02:14:46 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:32.198 02:14:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:32.456 02:14:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:32.456 02:14:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:32.714 02:14:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:32.714 02:14:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:32.714 02:14:47 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:32.972 02:14:47 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:32.972 02:14:47 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DAEaa8hx0e 00:36:32.972 02:14:47 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:32.972 02:14:47 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:32.972 02:14:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:33.230 [2024-07-24 02:14:47.877174] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DAEaa8hx0e': 0100660 00:36:33.230 [2024-07-24 02:14:47.877212] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:33.230 request: 00:36:33.230 { 00:36:33.230 "name": "key0", 00:36:33.230 "path": "/tmp/tmp.DAEaa8hx0e", 00:36:33.230 "method": "keyring_file_add_key", 00:36:33.230 "req_id": 1 00:36:33.230 } 00:36:33.230 Got JSON-RPC error response 00:36:33.230 response: 00:36:33.230 { 00:36:33.230 "code": -1, 00:36:33.230 "message": "Operation not permitted" 00:36:33.230 } 00:36:33.230 02:14:47 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:33.230 02:14:47 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:33.230 02:14:47 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:33.230 02:14:47 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:33.230 02:14:47 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DAEaa8hx0e 00:36:33.230 02:14:47 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:33.230 02:14:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DAEaa8hx0e 00:36:33.489 02:14:48 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DAEaa8hx0e 00:36:33.489 02:14:48 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:33.489 02:14:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:33.489 02:14:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:33.489 02:14:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:33.489 02:14:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:33.489 02:14:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:33.747 02:14:48 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:33.747 02:14:48 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.747 02:14:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:33.747 [2024-07-24 02:14:48.615217] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DAEaa8hx0e': No such file or directory 00:36:33.747 [2024-07-24 02:14:48.615254] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:33.747 [2024-07-24 02:14:48.615294] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:33.747 [2024-07-24 02:14:48.615307] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:33.747 [2024-07-24 02:14:48.615329] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:33.747 request: 00:36:33.747 { 00:36:33.747 "name": "nvme0", 00:36:33.747 "trtype": "tcp", 00:36:33.747 "traddr": "127.0.0.1", 00:36:33.747 "adrfam": "ipv4", 00:36:33.747 
"trsvcid": "4420", 00:36:33.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.747 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.747 "prchk_reftag": false, 00:36:33.747 "prchk_guard": false, 00:36:33.747 "hdgst": false, 00:36:33.747 "ddgst": false, 00:36:33.747 "psk": "key0", 00:36:33.747 "method": "bdev_nvme_attach_controller", 00:36:33.747 "req_id": 1 00:36:33.747 } 00:36:33.747 Got JSON-RPC error response 00:36:33.747 response: 00:36:33.747 { 00:36:33.747 "code": -19, 00:36:33.747 "message": "No such device" 00:36:33.747 } 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:33.747 02:14:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:33.747 02:14:48 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:33.747 02:14:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:34.005 02:14:48 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:34.005 02:14:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:34.005 02:14:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:34.005 02:14:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:34.005 02:14:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:34.005 02:14:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:34.262 02:14:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ta5Y7uzQz8 00:36:34.262 02:14:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:34.262 02:14:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:34.262 02:14:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:34.262 02:14:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:34.262 02:14:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:34.262 02:14:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:34.262 02:14:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:34.262 02:14:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ta5Y7uzQz8 00:36:34.262 02:14:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ta5Y7uzQz8 00:36:34.262 02:14:48 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.Ta5Y7uzQz8 00:36:34.262 02:14:48 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ta5Y7uzQz8 00:36:34.262 02:14:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ta5Y7uzQz8 00:36:34.520 02:14:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.520 02:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:34.778 nvme0n1 00:36:34.778 
02:14:49 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:34.778 02:14:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:34.778 02:14:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:34.778 02:14:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.778 02:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.778 02:14:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.036 02:14:49 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:35.036 02:14:49 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:35.036 02:14:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:35.293 02:14:50 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:35.293 02:14:50 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:35.293 02:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.293 02:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.293 02:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.551 02:14:50 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:35.551 02:14:50 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:35.551 02:14:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.551 02:14:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.551 02:14:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.551 02:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.551 02:14:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.809 02:14:50 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:35.809 02:14:50 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:35.809 02:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:36.066 02:14:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:36.066 02:14:50 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:36.066 02:14:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.323 02:14:51 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:36.323 02:14:51 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ta5Y7uzQz8 00:36:36.323 02:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ta5Y7uzQz8 00:36:36.580 02:14:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.FTvK2lIVu3 00:36:36.580 02:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.FTvK2lIVu3 00:36:36.837 02:14:51 
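The key files re-registered just above come out of the prep_key/format_interchange_psk helpers traced earlier; as a standalone sketch it amounts to the lines below. The key hex string, socket path and RPC verbs are copied from this run, while the temp-file handling, the python body and the exact CRC32/base64 wrapping are assumptions about what the helper appears to produce, not its verbatim code.

# Wrap a raw hex PSK in the NVMe TLS interchange format and register it with the bperf instance.
KEY_HEX=00112233445566778899aabbccddeeff
KEY_PATH=$(mktemp)
python3 - "$KEY_HEX" > "$KEY_PATH" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")        # 4-byte CRC32 appended to the key before encoding
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":", end="")
PYEOF
chmod 0600 "$KEY_PATH"    # 0660 is rejected, as the keyring_file_check_path error above shows
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$KEY_PATH"   # full jenkins workspace path in the trace
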
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:36.837 02:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:37.094 nvme0n1 00:36:37.095 02:14:51 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:37.095 02:14:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:37.353 02:14:52 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:37.353 "subsystems": [ 00:36:37.353 { 00:36:37.353 "subsystem": "keyring", 00:36:37.353 "config": [ 00:36:37.353 { 00:36:37.353 "method": "keyring_file_add_key", 00:36:37.353 "params": { 00:36:37.353 "name": "key0", 00:36:37.353 "path": "/tmp/tmp.Ta5Y7uzQz8" 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "keyring_file_add_key", 00:36:37.353 "params": { 00:36:37.353 "name": "key1", 00:36:37.353 "path": "/tmp/tmp.FTvK2lIVu3" 00:36:37.353 } 00:36:37.353 } 00:36:37.353 ] 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "subsystem": "iobuf", 00:36:37.353 "config": [ 00:36:37.353 { 00:36:37.353 "method": "iobuf_set_options", 00:36:37.353 "params": { 00:36:37.353 "small_pool_count": 8192, 00:36:37.353 "large_pool_count": 1024, 00:36:37.353 "small_bufsize": 8192, 00:36:37.353 "large_bufsize": 135168 00:36:37.353 } 00:36:37.353 } 00:36:37.353 ] 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "subsystem": "sock", 00:36:37.353 "config": [ 00:36:37.353 { 00:36:37.353 "method": "sock_set_default_impl", 00:36:37.353 "params": { 00:36:37.353 "impl_name": "posix" 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "sock_impl_set_options", 00:36:37.353 "params": { 00:36:37.353 "impl_name": "ssl", 00:36:37.353 "recv_buf_size": 4096, 00:36:37.353 "send_buf_size": 4096, 00:36:37.353 "enable_recv_pipe": true, 00:36:37.353 "enable_quickack": false, 00:36:37.353 "enable_placement_id": 0, 00:36:37.353 "enable_zerocopy_send_server": true, 00:36:37.353 "enable_zerocopy_send_client": false, 00:36:37.353 "zerocopy_threshold": 0, 00:36:37.353 "tls_version": 0, 00:36:37.353 "enable_ktls": false 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "sock_impl_set_options", 00:36:37.353 "params": { 00:36:37.353 "impl_name": "posix", 00:36:37.353 "recv_buf_size": 2097152, 00:36:37.353 "send_buf_size": 2097152, 00:36:37.353 "enable_recv_pipe": true, 00:36:37.353 "enable_quickack": false, 00:36:37.353 "enable_placement_id": 0, 00:36:37.353 "enable_zerocopy_send_server": true, 00:36:37.353 "enable_zerocopy_send_client": false, 00:36:37.353 "zerocopy_threshold": 0, 00:36:37.353 "tls_version": 0, 00:36:37.353 "enable_ktls": false 00:36:37.353 } 00:36:37.353 } 00:36:37.353 ] 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "subsystem": "vmd", 00:36:37.353 "config": [] 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "subsystem": "accel", 00:36:37.353 "config": [ 00:36:37.353 { 00:36:37.353 "method": "accel_set_options", 00:36:37.353 "params": { 00:36:37.353 "small_cache_size": 128, 00:36:37.353 "large_cache_size": 16, 00:36:37.353 "task_count": 2048, 00:36:37.353 "sequence_count": 2048, 00:36:37.353 "buf_count": 2048 00:36:37.353 } 00:36:37.353 } 00:36:37.353 ] 00:36:37.353 
}, 00:36:37.353 { 00:36:37.353 "subsystem": "bdev", 00:36:37.353 "config": [ 00:36:37.353 { 00:36:37.353 "method": "bdev_set_options", 00:36:37.353 "params": { 00:36:37.353 "bdev_io_pool_size": 65535, 00:36:37.353 "bdev_io_cache_size": 256, 00:36:37.353 "bdev_auto_examine": true, 00:36:37.353 "iobuf_small_cache_size": 128, 00:36:37.353 "iobuf_large_cache_size": 16 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "bdev_raid_set_options", 00:36:37.353 "params": { 00:36:37.353 "process_window_size_kb": 1024, 00:36:37.353 "process_max_bandwidth_mb_sec": 0 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "bdev_iscsi_set_options", 00:36:37.353 "params": { 00:36:37.353 "timeout_sec": 30 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "bdev_nvme_set_options", 00:36:37.353 "params": { 00:36:37.353 "action_on_timeout": "none", 00:36:37.353 "timeout_us": 0, 00:36:37.353 "timeout_admin_us": 0, 00:36:37.353 "keep_alive_timeout_ms": 10000, 00:36:37.353 "arbitration_burst": 0, 00:36:37.353 "low_priority_weight": 0, 00:36:37.353 "medium_priority_weight": 0, 00:36:37.353 "high_priority_weight": 0, 00:36:37.353 "nvme_adminq_poll_period_us": 10000, 00:36:37.353 "nvme_ioq_poll_period_us": 0, 00:36:37.353 "io_queue_requests": 512, 00:36:37.353 "delay_cmd_submit": true, 00:36:37.353 "transport_retry_count": 4, 00:36:37.353 "bdev_retry_count": 3, 00:36:37.353 "transport_ack_timeout": 0, 00:36:37.353 "ctrlr_loss_timeout_sec": 0, 00:36:37.353 "reconnect_delay_sec": 0, 00:36:37.353 "fast_io_fail_timeout_sec": 0, 00:36:37.353 "disable_auto_failback": false, 00:36:37.353 "generate_uuids": false, 00:36:37.353 "transport_tos": 0, 00:36:37.353 "nvme_error_stat": false, 00:36:37.353 "rdma_srq_size": 0, 00:36:37.353 "io_path_stat": false, 00:36:37.353 "allow_accel_sequence": false, 00:36:37.353 "rdma_max_cq_size": 0, 00:36:37.353 "rdma_cm_event_timeout_ms": 0, 00:36:37.353 "dhchap_digests": [ 00:36:37.353 "sha256", 00:36:37.353 "sha384", 00:36:37.353 "sha512" 00:36:37.353 ], 00:36:37.353 "dhchap_dhgroups": [ 00:36:37.353 "null", 00:36:37.353 "ffdhe2048", 00:36:37.353 "ffdhe3072", 00:36:37.353 "ffdhe4096", 00:36:37.353 "ffdhe6144", 00:36:37.353 "ffdhe8192" 00:36:37.353 ] 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "bdev_nvme_attach_controller", 00:36:37.353 "params": { 00:36:37.353 "name": "nvme0", 00:36:37.353 "trtype": "TCP", 00:36:37.353 "adrfam": "IPv4", 00:36:37.353 "traddr": "127.0.0.1", 00:36:37.353 "trsvcid": "4420", 00:36:37.353 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.353 "prchk_reftag": false, 00:36:37.353 "prchk_guard": false, 00:36:37.353 "ctrlr_loss_timeout_sec": 0, 00:36:37.353 "reconnect_delay_sec": 0, 00:36:37.353 "fast_io_fail_timeout_sec": 0, 00:36:37.353 "psk": "key0", 00:36:37.353 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.353 "hdgst": false, 00:36:37.353 "ddgst": false 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "bdev_nvme_set_hotplug", 00:36:37.353 "params": { 00:36:37.353 "period_us": 100000, 00:36:37.353 "enable": false 00:36:37.353 } 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "method": "bdev_wait_for_examine" 00:36:37.353 } 00:36:37.353 ] 00:36:37.353 }, 00:36:37.353 { 00:36:37.353 "subsystem": "nbd", 00:36:37.353 "config": [] 00:36:37.353 } 00:36:37.353 ] 00:36:37.353 }' 00:36:37.353 02:14:52 keyring_file -- keyring/file.sh@114 -- # killprocess 1609806 00:36:37.353 02:14:52 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1609806 ']' 00:36:37.354 02:14:52 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 1609806 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609806 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609806' 00:36:37.354 killing process with pid 1609806 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@967 -- # kill 1609806 00:36:37.354 Received shutdown signal, test time was about 1.000000 seconds 00:36:37.354 00:36:37.354 Latency(us) 00:36:37.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.354 =================================================================================================================== 00:36:37.354 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:37.354 02:14:52 keyring_file -- common/autotest_common.sh@972 -- # wait 1609806 00:36:37.612 02:14:52 keyring_file -- keyring/file.sh@117 -- # bperfpid=1611260 00:36:37.612 02:14:52 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1611260 /var/tmp/bperf.sock 00:36:37.612 02:14:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1611260 ']' 00:36:37.612 02:14:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:37.612 02:14:52 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:37.612 02:14:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.612 02:14:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:37.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
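The restart traced here can be read as the sketch below: the JSON captured by save_config from the first bdevperf instance (which is then killed) is handed to its replacement through process substitution, so the new process comes up with both key files and the TLS controller settings already applied. The binary path, flags and the /dev/fd config trick are copied from the command line above; the save_config plumbing is an assumed reconstruction of the bperf_cmd helper.

# Capture the running configuration, tear down the old instance, then relaunch with that config.
CONFIG=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)      # JSON dump shown above
# (first bdevperf is killed here, freeing /var/tmp/bperf.sock)
build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$CONFIG") &             # appears as -c /dev/fd/63 in the trace
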
00:36:37.612 02:14:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.612 02:14:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:37.612 02:14:52 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:37.612 "subsystems": [ 00:36:37.612 { 00:36:37.612 "subsystem": "keyring", 00:36:37.612 "config": [ 00:36:37.612 { 00:36:37.612 "method": "keyring_file_add_key", 00:36:37.612 "params": { 00:36:37.612 "name": "key0", 00:36:37.612 "path": "/tmp/tmp.Ta5Y7uzQz8" 00:36:37.612 } 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "method": "keyring_file_add_key", 00:36:37.612 "params": { 00:36:37.612 "name": "key1", 00:36:37.612 "path": "/tmp/tmp.FTvK2lIVu3" 00:36:37.612 } 00:36:37.612 } 00:36:37.612 ] 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "subsystem": "iobuf", 00:36:37.612 "config": [ 00:36:37.612 { 00:36:37.612 "method": "iobuf_set_options", 00:36:37.612 "params": { 00:36:37.612 "small_pool_count": 8192, 00:36:37.612 "large_pool_count": 1024, 00:36:37.612 "small_bufsize": 8192, 00:36:37.612 "large_bufsize": 135168 00:36:37.612 } 00:36:37.612 } 00:36:37.612 ] 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "subsystem": "sock", 00:36:37.612 "config": [ 00:36:37.612 { 00:36:37.612 "method": "sock_set_default_impl", 00:36:37.612 "params": { 00:36:37.612 "impl_name": "posix" 00:36:37.612 } 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "method": "sock_impl_set_options", 00:36:37.612 "params": { 00:36:37.612 "impl_name": "ssl", 00:36:37.612 "recv_buf_size": 4096, 00:36:37.612 "send_buf_size": 4096, 00:36:37.612 "enable_recv_pipe": true, 00:36:37.612 "enable_quickack": false, 00:36:37.612 "enable_placement_id": 0, 00:36:37.612 "enable_zerocopy_send_server": true, 00:36:37.612 "enable_zerocopy_send_client": false, 00:36:37.612 "zerocopy_threshold": 0, 00:36:37.612 "tls_version": 0, 00:36:37.612 "enable_ktls": false 00:36:37.612 } 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "method": "sock_impl_set_options", 00:36:37.612 "params": { 00:36:37.612 "impl_name": "posix", 00:36:37.612 "recv_buf_size": 2097152, 00:36:37.612 "send_buf_size": 2097152, 00:36:37.612 "enable_recv_pipe": true, 00:36:37.612 "enable_quickack": false, 00:36:37.612 "enable_placement_id": 0, 00:36:37.612 "enable_zerocopy_send_server": true, 00:36:37.612 "enable_zerocopy_send_client": false, 00:36:37.612 "zerocopy_threshold": 0, 00:36:37.612 "tls_version": 0, 00:36:37.612 "enable_ktls": false 00:36:37.612 } 00:36:37.612 } 00:36:37.612 ] 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "subsystem": "vmd", 00:36:37.612 "config": [] 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "subsystem": "accel", 00:36:37.612 "config": [ 00:36:37.612 { 00:36:37.612 "method": "accel_set_options", 00:36:37.612 "params": { 00:36:37.612 "small_cache_size": 128, 00:36:37.612 "large_cache_size": 16, 00:36:37.612 "task_count": 2048, 00:36:37.612 "sequence_count": 2048, 00:36:37.612 "buf_count": 2048 00:36:37.612 } 00:36:37.612 } 00:36:37.612 ] 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "subsystem": "bdev", 00:36:37.612 "config": [ 00:36:37.612 { 00:36:37.612 "method": "bdev_set_options", 00:36:37.612 "params": { 00:36:37.612 "bdev_io_pool_size": 65535, 00:36:37.612 "bdev_io_cache_size": 256, 00:36:37.612 "bdev_auto_examine": true, 00:36:37.612 "iobuf_small_cache_size": 128, 00:36:37.612 "iobuf_large_cache_size": 16 00:36:37.612 } 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "method": "bdev_raid_set_options", 00:36:37.612 "params": { 00:36:37.612 "process_window_size_kb": 1024, 00:36:37.612 "process_max_bandwidth_mb_sec": 0 00:36:37.612 
} 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "method": "bdev_iscsi_set_options", 00:36:37.612 "params": { 00:36:37.612 "timeout_sec": 30 00:36:37.612 } 00:36:37.612 }, 00:36:37.612 { 00:36:37.612 "method": "bdev_nvme_set_options", 00:36:37.612 "params": { 00:36:37.612 "action_on_timeout": "none", 00:36:37.612 "timeout_us": 0, 00:36:37.612 "timeout_admin_us": 0, 00:36:37.612 "keep_alive_timeout_ms": 10000, 00:36:37.612 "arbitration_burst": 0, 00:36:37.612 "low_priority_weight": 0, 00:36:37.612 "medium_priority_weight": 0, 00:36:37.612 "high_priority_weight": 0, 00:36:37.612 "nvme_adminq_poll_period_us": 10000, 00:36:37.612 "nvme_ioq_poll_period_us": 0, 00:36:37.612 "io_queue_requests": 512, 00:36:37.612 "delay_cmd_submit": true, 00:36:37.612 "transport_retry_count": 4, 00:36:37.612 "bdev_retry_count": 3, 00:36:37.612 "transport_ack_timeout": 0, 00:36:37.612 "ctrlr_loss_timeout_sec": 0, 00:36:37.612 "reconnect_delay_sec": 0, 00:36:37.612 "fast_io_fail_timeout_sec": 0, 00:36:37.612 "disable_auto_failback": false, 00:36:37.612 "generate_uuids": false, 00:36:37.612 "transport_tos": 0, 00:36:37.612 "nvme_error_stat": false, 00:36:37.612 "rdma_srq_size": 0, 00:36:37.612 "io_path_stat": false, 00:36:37.612 "allow_accel_sequence": false, 00:36:37.612 "rdma_max_cq_size": 0, 00:36:37.612 "rdma_cm_event_timeout_ms": 0, 00:36:37.612 "dhchap_digests": [ 00:36:37.612 "sha256", 00:36:37.612 "sha384", 00:36:37.612 "sha512" 00:36:37.612 ], 00:36:37.612 "dhchap_dhgroups": [ 00:36:37.612 "null", 00:36:37.612 "ffdhe2048", 00:36:37.612 "ffdhe3072", 00:36:37.612 "ffdhe4096", 00:36:37.612 "ffdhe6144", 00:36:37.612 "ffdhe8192" 00:36:37.612 ] 00:36:37.612 } 00:36:37.613 }, 00:36:37.613 { 00:36:37.613 "method": "bdev_nvme_attach_controller", 00:36:37.613 "params": { 00:36:37.613 "name": "nvme0", 00:36:37.613 "trtype": "TCP", 00:36:37.613 "adrfam": "IPv4", 00:36:37.613 "traddr": "127.0.0.1", 00:36:37.613 "trsvcid": "4420", 00:36:37.613 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.613 "prchk_reftag": false, 00:36:37.613 "prchk_guard": false, 00:36:37.613 "ctrlr_loss_timeout_sec": 0, 00:36:37.613 "reconnect_delay_sec": 0, 00:36:37.613 "fast_io_fail_timeout_sec": 0, 00:36:37.613 "psk": "key0", 00:36:37.613 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.613 "hdgst": false, 00:36:37.613 "ddgst": false 00:36:37.613 } 00:36:37.613 }, 00:36:37.613 { 00:36:37.613 "method": "bdev_nvme_set_hotplug", 00:36:37.613 "params": { 00:36:37.613 "period_us": 100000, 00:36:37.613 "enable": false 00:36:37.613 } 00:36:37.613 }, 00:36:37.613 { 00:36:37.613 "method": "bdev_wait_for_examine" 00:36:37.613 } 00:36:37.613 ] 00:36:37.613 }, 00:36:37.613 { 00:36:37.613 "subsystem": "nbd", 00:36:37.613 "config": [] 00:36:37.613 } 00:36:37.613 ] 00:36:37.613 }' 00:36:37.613 [2024-07-24 02:14:52.428866] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:36:37.613 [2024-07-24 02:14:52.428956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611260 ] 00:36:37.613 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.613 [2024-07-24 02:14:52.486097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.870 [2024-07-24 02:14:52.573200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.870 [2024-07-24 02:14:52.754782] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:38.802 02:14:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:38.802 02:14:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:38.802 02:14:53 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:38.802 02:14:53 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:38.802 02:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.802 02:14:53 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:38.802 02:14:53 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:38.802 02:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.802 02:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.802 02:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.802 02:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.802 02:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:39.059 02:14:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:39.059 02:14:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:39.059 02:14:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:39.059 02:14:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.059 02:14:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.059 02:14:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.059 02:14:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:39.317 02:14:54 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:39.317 02:14:54 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:39.317 02:14:54 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:39.317 02:14:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:39.574 02:14:54 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:39.574 02:14:54 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:39.574 02:14:54 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ta5Y7uzQz8 /tmp/tmp.FTvK2lIVu3 00:36:39.574 02:14:54 keyring_file -- keyring/file.sh@20 -- # killprocess 1611260 00:36:39.574 02:14:54 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1611260 ']' 00:36:39.574 02:14:54 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1611260 00:36:39.574 02:14:54 keyring_file -- 
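The get_refcnt/get_key checks that recur throughout this section are a single RPC plus a jq filter; a standalone equivalent, with the socket path and key names copied from this run, looks like the two commands below.

# How many references does key0 hold, and how many keys are registered at all?
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == "key0") | .refcnt'                # 2 while nvme0 holds the key, 1 otherwise
scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length   # 0 once every key has been removed
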
common/autotest_common.sh@953 -- # uname 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611260 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611260' 00:36:39.575 killing process with pid 1611260 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@967 -- # kill 1611260 00:36:39.575 Received shutdown signal, test time was about 1.000000 seconds 00:36:39.575 00:36:39.575 Latency(us) 00:36:39.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:39.575 =================================================================================================================== 00:36:39.575 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:39.575 02:14:54 keyring_file -- common/autotest_common.sh@972 -- # wait 1611260 00:36:39.833 02:14:54 keyring_file -- keyring/file.sh@21 -- # killprocess 1609796 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1609796 ']' 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1609796 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1609796 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1609796' 00:36:39.833 killing process with pid 1609796 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@967 -- # kill 1609796 00:36:39.833 [2024-07-24 02:14:54.666346] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:39.833 02:14:54 keyring_file -- common/autotest_common.sh@972 -- # wait 1609796 00:36:40.399 00:36:40.399 real 0m14.133s 00:36:40.399 user 0m35.326s 00:36:40.399 sys 0m3.285s 00:36:40.399 02:14:55 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:40.399 02:14:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:40.399 ************************************ 00:36:40.399 END TEST keyring_file 00:36:40.399 ************************************ 00:36:40.399 02:14:55 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:40.399 02:14:55 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:40.399 02:14:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:40.399 02:14:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:40.399 02:14:55 -- common/autotest_common.sh@10 -- # set +x 00:36:40.399 ************************************ 00:36:40.399 START TEST keyring_linux 00:36:40.399 ************************************ 00:36:40.399 02:14:55 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:40.399 * Looking for test 
storage... 00:36:40.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:40.399 02:14:55 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:40.399 02:14:55 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.399 02:14:55 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.399 02:14:55 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.399 02:14:55 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.399 02:14:55 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.399 02:14:55 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.399 02:14:55 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.400 02:14:55 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.400 02:14:55 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:40.400 02:14:55 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:40.400 02:14:55 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:40.400 /tmp/:spdk-test:key0 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:40.400 02:14:55 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:40.400 02:14:55 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:40.400 /tmp/:spdk-test:key1 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1611623 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1611623 00:36:40.400 02:14:55 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:40.400 02:14:55 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1611623 ']' 00:36:40.400 02:14:55 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:40.400 02:14:55 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:40.400 02:14:55 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:40.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:40.400 02:14:55 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:40.400 02:14:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:40.658 [2024-07-24 02:14:55.305534] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:36:40.658 [2024-07-24 02:14:55.305627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611623 ] 00:36:40.658 EAL: No free 2048 kB hugepages reported on node 1 00:36:40.658 [2024-07-24 02:14:55.366937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:40.658 [2024-07-24 02:14:55.456878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:40.916 02:14:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:40.916 [2024-07-24 02:14:55.714791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:40.916 null0 00:36:40.916 [2024-07-24 02:14:55.746852] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:40.916 [2024-07-24 02:14:55.747401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:40.916 02:14:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:40.916 433815529 00:36:40.916 02:14:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:40.916 293416707 00:36:40.916 02:14:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1611753 00:36:40.916 02:14:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:40.916 02:14:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1611753 /var/tmp/bperf.sock 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1611753 ']' 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:40.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:40.916 02:14:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:41.175 [2024-07-24 02:14:55.812330] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 22.11.4 initialization... 
00:36:41.175 [2024-07-24 02:14:55.812400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611753 ] 00:36:41.175 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.175 [2024-07-24 02:14:55.872522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.175 [2024-07-24 02:14:55.962976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.175 02:14:56 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:41.175 02:14:56 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:41.175 02:14:56 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:41.175 02:14:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:41.432 02:14:56 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:41.432 02:14:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:41.998 02:14:56 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:41.998 02:14:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:41.998 [2024-07-24 02:14:56.827108] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:42.255 nvme0n1 00:36:42.255 02:14:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:42.255 02:14:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:42.255 02:14:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:42.255 02:14:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:42.255 02:14:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.255 02:14:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:42.513 02:14:57 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.513 02:14:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.513 02:14:57 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@25 -- # sn=433815529 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@26 -- # [[ 433815529 == \4\3\3\8\1\5\5\2\9 ]] 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 433815529 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:42.513 02:14:57 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:42.771 Running I/O for 1 seconds... 00:36:43.704 00:36:43.704 Latency(us) 00:36:43.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:43.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:43.704 nvme0n1 : 1.01 6780.08 26.48 0.00 0.00 18753.47 12281.93 31457.28 00:36:43.704 =================================================================================================================== 00:36:43.704 Total : 6780.08 26.48 0.00 0.00 18753.47 12281.93 31457.28 00:36:43.704 0 00:36:43.704 02:14:58 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:43.704 02:14:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:43.962 02:14:58 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:43.962 02:14:58 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:43.962 02:14:58 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:43.962 02:14:58 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:43.962 02:14:58 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:43.962 02:14:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.219 02:14:59 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:44.219 02:14:59 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:44.219 02:14:59 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:44.219 02:14:59 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:44.219 02:14:59 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.220 02:14:59 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:44.477 [2024-07-24 02:14:59.275579] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:44.477 [2024-07-24 02:14:59.276342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4750 (107): Transport endpoint is not connected 00:36:44.477 [2024-07-24 02:14:59.277334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ed4750 (9): Bad file descriptor 00:36:44.477 [2024-07-24 02:14:59.278332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:44.477 [2024-07-24 02:14:59.278356] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:44.477 [2024-07-24 02:14:59.278385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:44.477 request: 00:36:44.477 { 00:36:44.477 "name": "nvme0", 00:36:44.477 "trtype": "tcp", 00:36:44.477 "traddr": "127.0.0.1", 00:36:44.477 "adrfam": "ipv4", 00:36:44.477 "trsvcid": "4420", 00:36:44.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.478 "prchk_reftag": false, 00:36:44.478 "prchk_guard": false, 00:36:44.478 "hdgst": false, 00:36:44.478 "ddgst": false, 00:36:44.478 "psk": ":spdk-test:key1", 00:36:44.478 "method": "bdev_nvme_attach_controller", 00:36:44.478 "req_id": 1 00:36:44.478 } 00:36:44.478 Got JSON-RPC error response 00:36:44.478 response: 00:36:44.478 { 00:36:44.478 "code": -5, 00:36:44.478 "message": "Input/output error" 00:36:44.478 } 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@33 -- # sn=433815529 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 433815529 00:36:44.478 1 links removed 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@33 -- # sn=293416707 00:36:44.478 
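The kernel-keyring side of this test reduces to the keyctl round trip below; the key name, payload and serial number are the ones printed in this run, so treat it as a recap of the traced commands rather than the script itself.

# Add the interchange-format PSK as a "user" key in the session keyring, find it, inspect it, drop it.
SN=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0    # resolves to the same serial number (433815529 above)
keyctl print "$SN"                       # payload consumed by bdev_nvme via --psk :spdk-test:key0
keyctl unlink "$SN"                      # reported as "1 links removed" during cleanup
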
02:14:59 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 293416707 00:36:44.478 1 links removed 00:36:44.478 02:14:59 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1611753 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1611753 ']' 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1611753 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611753 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611753' 00:36:44.478 killing process with pid 1611753 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@967 -- # kill 1611753 00:36:44.478 Received shutdown signal, test time was about 1.000000 seconds 00:36:44.478 00:36:44.478 Latency(us) 00:36:44.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.478 =================================================================================================================== 00:36:44.478 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.478 02:14:59 keyring_linux -- common/autotest_common.sh@972 -- # wait 1611753 00:36:44.736 02:14:59 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1611623 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1611623 ']' 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1611623 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1611623 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1611623' 00:36:44.736 killing process with pid 1611623 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@967 -- # kill 1611623 00:36:44.736 02:14:59 keyring_linux -- common/autotest_common.sh@972 -- # wait 1611623 00:36:45.380 00:36:45.380 real 0m4.894s 00:36:45.380 user 0m9.280s 00:36:45.380 sys 0m1.579s 00:36:45.380 02:15:00 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:45.380 02:15:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:45.380 ************************************ 00:36:45.380 END TEST keyring_linux 00:36:45.380 ************************************ 00:36:45.380 02:15:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- 
spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:45.380 02:15:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:45.380 02:15:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:45.380 02:15:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:45.380 02:15:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:45.380 02:15:00 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:45.380 02:15:00 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:45.380 02:15:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:45.380 02:15:00 -- common/autotest_common.sh@10 -- # set +x 00:36:45.380 02:15:00 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:45.380 02:15:00 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:36:45.380 02:15:00 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:36:45.380 02:15:00 -- common/autotest_common.sh@10 -- # set +x 00:36:47.284 INFO: APP EXITING 00:36:47.284 INFO: killing all VMs 00:36:47.284 INFO: killing vhost app 00:36:47.284 INFO: EXIT DONE 00:36:48.219 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:48.219 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:48.219 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:48.219 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:48.219 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:48.219 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:48.219 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:48.219 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:48.219 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:48.219 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:48.219 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:48.219 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:48.219 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:48.219 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:48.219 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:48.219 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:48.219 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:49.594 Cleaning 00:36:49.594 Removing: /var/run/dpdk/spdk0/config 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:49.594 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:49.594 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:49.594 Removing: /var/run/dpdk/spdk1/config 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:49.594 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:49.594 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:49.594 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:49.594 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:49.594 Removing: /var/run/dpdk/spdk2/config 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:49.594 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:49.594 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:49.594 Removing: /var/run/dpdk/spdk3/config 00:36:49.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:49.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:49.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:49.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:49.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:49.594 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:49.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:49.595 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:49.595 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:49.595 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:49.595 Removing: /var/run/dpdk/spdk4/config 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:49.595 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:49.595 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:49.595 Removing: /dev/shm/bdev_svc_trace.1 00:36:49.595 Removing: /dev/shm/nvmf_trace.0 00:36:49.595 Removing: /dev/shm/spdk_tgt_trace.pid1292006 00:36:49.595 Removing: /var/run/dpdk/spdk0 00:36:49.595 Removing: /var/run/dpdk/spdk1 00:36:49.595 Removing: /var/run/dpdk/spdk2 00:36:49.595 Removing: /var/run/dpdk/spdk3 00:36:49.595 Removing: /var/run/dpdk/spdk4 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1290455 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1291185 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1292006 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1292433 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1293120 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1293260 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1293982 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1293989 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1294235 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1295486 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1296473 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1296661 
00:36:49.595 Removing: /var/run/dpdk/spdk_pid1296969 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1297172 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1297360 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1297519 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1297679 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1297858 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1298169 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1300515 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1300684 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1300846 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1300860 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1301280 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1301291 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1301715 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1301725 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1302014 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1302025 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1302187 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1302322 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1302686 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1302840 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1303037 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1303205 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1303345 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1303418 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1303569 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1303845 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1304005 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1304159 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1304350 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1304592 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1304751 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1304904 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1305176 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1305370 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1305595 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1305762 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1306028 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1306193 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1306578 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1307074 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1307284 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1307453 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1307607 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1307874 00:36:49.595 Removing: /var/run/dpdk/spdk_pid1307953 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1308157 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1310226 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1312743 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1319820 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1320230 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1322736 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1322896 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1325460 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1329113 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1331293 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1337584 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1342903 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1344609 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1345277 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1355612 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1357887 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1411260 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1414515 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1418236 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1422061 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1422067 
00:36:49.853 Removing: /var/run/dpdk/spdk_pid1422605 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1423252 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1423910 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1424311 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1424314 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1424458 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1424582 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1424596 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1425246 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1425899 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1426470 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1426862 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1426960 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1427102 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1427977 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1428697 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1434124 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1459886 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1462673 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1463848 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1465044 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1465177 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1465312 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1465428 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1465763 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1467072 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1467784 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1468108 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1469718 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1470137 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1470582 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1473089 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1476340 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1479863 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1503226 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1505868 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1509613 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1510563 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1511552 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1514177 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1516574 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1521172 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1521286 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1524048 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1524190 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1524323 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1524589 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1524667 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1525791 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1526967 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1528142 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1529317 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1530492 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1531679 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1535368 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1535818 00:36:49.853 Removing: /var/run/dpdk/spdk_pid1537095 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1537835 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1541420 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1543395 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1546876 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1550624 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1556825 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1561164 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1561234 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1573479 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1573887 
00:36:49.854 Removing: /var/run/dpdk/spdk_pid1574307 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1574814 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1575395 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1575801 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1576209 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1576612 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1579110 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1579249 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1583647 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1583721 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1585444 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1590373 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1590383 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1593262 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1594670 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1596066 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1596918 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1598326 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1599196 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1604392 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1604737 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1605125 00:36:49.854 Removing: /var/run/dpdk/spdk_pid1606685 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1606991 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1607357 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1609796 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1609806 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1611260 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1611623 00:36:50.112 Removing: /var/run/dpdk/spdk_pid1611753 00:36:50.112 Clean 00:36:50.112 02:15:04 -- common/autotest_common.sh@1449 -- # return 0 00:36:50.112 02:15:04 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:50.112 02:15:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:50.112 02:15:04 -- common/autotest_common.sh@10 -- # set +x 00:36:50.112 02:15:04 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:50.112 02:15:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:50.112 02:15:04 -- common/autotest_common.sh@10 -- # set +x 00:36:50.112 02:15:04 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:50.112 02:15:04 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:50.112 02:15:04 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:50.112 02:15:04 -- spdk/autotest.sh@391 -- # hash lcov 00:36:50.112 02:15:04 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:50.112 02:15:04 -- spdk/autotest.sh@393 -- # hostname 00:36:50.112 02:15:04 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:50.370 geninfo: WARNING: invalid characters removed from testname! 
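The lcov invocations around this point capture the coverage counters produced while the tests ran and then merge and filter them into a single report. A minimal sketch of that flow, reusing the output paths and filter patterns visible in this log (the LCOV_OPTS shorthand is introduced here only for readability and is not part of the autotest scripts), might look like:

  #!/usr/bin/env bash
  # Sketch of the coverage capture/merge/filter flow shown in this log.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT=$SPDK/../output
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
  # capture counters gathered while the tests ran, tagged with the build host name
  lcov $LCOV_OPTS -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"
  # merge with the pre-test baseline, then drop code that should not be counted
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
  lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' '/usr/*' '*/examples/vmd/*' -o "$OUT/cov_total.info"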
00:37:22.432 02:15:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:22.433 02:15:36 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:24.958 02:15:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:28.235 02:15:42 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:30.761 02:15:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:34.142 02:15:48 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:36.672 02:15:51 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:36.672 02:15:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:36.672 02:15:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:36.672 02:15:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:36.672 02:15:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:36.672 02:15:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.672 02:15:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.672 02:15:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.672 02:15:51 -- paths/export.sh@5 -- $ export PATH 00:37:36.672 02:15:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:36.672 02:15:51 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:36.672 02:15:51 -- common/autobuild_common.sh@447 -- $ date +%s 00:37:36.672 02:15:51 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721780151.XXXXXX 00:37:36.672 02:15:51 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721780151.4kOoxD 00:37:36.672 02:15:51 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:37:36.672 02:15:51 -- common/autobuild_common.sh@453 -- $ '[' -n v22.11.4 ']' 00:37:36.672 02:15:51 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:36.672 02:15:51 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:36.672 02:15:51 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:36.672 02:15:51 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:36.672 02:15:51 -- common/autobuild_common.sh@463 -- $ get_config_params 00:37:36.672 02:15:51 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:36.672 02:15:51 -- common/autotest_common.sh@10 -- $ set +x 00:37:36.673 02:15:51 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:36.673 02:15:51 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:37:36.673 02:15:51 -- pm/common@17 -- $ local monitor 00:37:36.673 02:15:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:36.673 02:15:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:36.673 02:15:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:36.673 
02:15:51 -- pm/common@21 -- $ date +%s 00:37:36.673 02:15:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:36.673 02:15:51 -- pm/common@21 -- $ date +%s 00:37:36.673 02:15:51 -- pm/common@25 -- $ sleep 1 00:37:36.673 02:15:51 -- pm/common@21 -- $ date +%s 00:37:36.673 02:15:51 -- pm/common@21 -- $ date +%s 00:37:36.673 02:15:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721780151 00:37:36.673 02:15:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721780151 00:37:36.673 02:15:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721780151 00:37:36.673 02:15:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721780151 00:37:36.673 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721780151_collect-vmstat.pm.log 00:37:36.673 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721780151_collect-cpu-load.pm.log 00:37:36.673 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721780151_collect-cpu-temp.pm.log 00:37:36.673 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721780151_collect-bmc-pm.bmc.pm.log 00:37:37.612 02:15:52 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:37.612 02:15:52 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:37.612 02:15:52 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:37.612 02:15:52 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:37.612 02:15:52 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:37.612 02:15:52 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:37.612 02:15:52 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:37.612 02:15:52 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:37.612 02:15:52 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:37.612 02:15:52 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:37.612 02:15:52 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:37.612 02:15:52 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:37.612 02:15:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:37.612 02:15:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:37.612 02:15:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:37.612 02:15:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:37.612 02:15:52 -- pm/common@44 -- $ pid=1623627 00:37:37.612 02:15:52 -- pm/common@50 -- $ kill -TERM 1623627 00:37:37.612 02:15:52 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:37.612 02:15:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:37.612 02:15:52 -- pm/common@44 -- $ pid=1623629 00:37:37.612 02:15:52 -- pm/common@50 -- $ kill -TERM 1623629 00:37:37.612 02:15:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:37.612 02:15:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:37.612 02:15:52 -- pm/common@44 -- $ pid=1623631 00:37:37.612 02:15:52 -- pm/common@50 -- $ kill -TERM 1623631 00:37:37.612 02:15:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:37.612 02:15:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:37.612 02:15:52 -- pm/common@44 -- $ pid=1623660 00:37:37.612 02:15:52 -- pm/common@50 -- $ sudo -E kill -TERM 1623660 00:37:37.871 + [[ -n 1186810 ]] 00:37:37.871 + sudo kill 1186810 00:37:37.882 [Pipeline] } 00:37:37.901 [Pipeline] // stage 00:37:37.906 [Pipeline] } 00:37:37.923 [Pipeline] // timeout 00:37:37.929 [Pipeline] } 00:37:37.948 [Pipeline] // catchError 00:37:37.954 [Pipeline] } 00:37:37.975 [Pipeline] // wrap 00:37:37.981 [Pipeline] } 00:37:37.998 [Pipeline] // catchError 00:37:38.007 [Pipeline] stage 00:37:38.010 [Pipeline] { (Epilogue) 00:37:38.026 [Pipeline] catchError 00:37:38.028 [Pipeline] { 00:37:38.042 [Pipeline] echo 00:37:38.044 Cleanup processes 00:37:38.050 [Pipeline] sh 00:37:38.337 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:38.337 1623762 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:38.337 1623892 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:38.351 [Pipeline] sh 00:37:38.650 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:38.650 ++ grep -v 'sudo pgrep' 00:37:38.650 ++ awk '{print $1}' 00:37:38.650 + sudo kill -9 1623762 00:37:38.659 [Pipeline] sh 00:37:38.936 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:48.919 [Pipeline] sh 00:37:49.204 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:49.204 Artifacts sizes are good 00:37:49.218 [Pipeline] archiveArtifacts 00:37:49.224 Archiving artifacts 00:37:49.467 [Pipeline] sh 00:37:49.748 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:49.763 [Pipeline] cleanWs 00:37:49.773 [WS-CLEANUP] Deleting project workspace... 00:37:49.773 [WS-CLEANUP] Deferred wipeout is used... 00:37:49.781 [WS-CLEANUP] done 00:37:49.783 [Pipeline] } 00:37:49.801 [Pipeline] // catchError 00:37:49.813 [Pipeline] sh 00:37:50.093 + logger -p user.info -t JENKINS-CI 00:37:50.101 [Pipeline] } 00:37:50.117 [Pipeline] // stage 00:37:50.122 [Pipeline] } 00:37:50.138 [Pipeline] // node 00:37:50.143 [Pipeline] End of Pipeline 00:37:50.192 Finished: SUCCESS